
Modified Newton-type methods for the NCP by using a class of one-parametric NCP-functions

Abstract

In this paper, we propose a new Newton-type method for solving the nonlinear complementarity problem (NCP) based on a class of one-parametric NCP-functions, in which an approximate Newton direction is obtained by solving a modified Newton equation at each iteration. The method is shown to be globally convergent without any additional assumption. To obtain fast convergence, we further propose a modified version of the method and show that it is globally and locally superlinearly convergent. Preliminary numerical results show the effectiveness of the modified method.

1 Introduction

Consider the nonlinear complementarity problem NCP(F)

$x \ge 0, \qquad F(x) \ge 0, \qquad x^T F(x) = 0,$

where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable function. We assume throughout this paper that $F$ is a $P_0$-function. It is well known that NCP(F) can be reformulated as a system of nonsmooth equations, in which the so-called NCP-function plays an important role.

Definition 1 A function $\phi: \mathbb{R}^2 \to \mathbb{R}$ is called an NCP-function if it satisfies

$\phi(a,b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0.$

Over the past two decades, a variety of NCP-functions have been studied (see, for example, [1–7]). Among them, a popular choice is the well-known Fischer-Burmeister NCP-function [3], defined as

$\phi_{FB}(a,b) = \sqrt{a^2 + b^2} - a - b.$

In this paper, we use a family of NCP-functions based on the FB function, introduced by Kanzow and Kleinmichel [6]:

$\phi_\lambda(a,b) = \sqrt{(a-b)^2 + \lambda a b} - a - b,$
(1)

where $\lambda$ is a fixed parameter with $\lambda \in (0,4)$. In the case $\lambda = 2$, the NCP-function $\phi_\lambda$ obviously reduces to the Fischer-Burmeister function, since $(a-b)^2 + 2ab = a^2 + b^2$.
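As a quick sanity check, the reduction to the Fischer-Burmeister function at $\lambda = 2$ and the NCP-function property at complementary points can be verified numerically (a sketch; the function names are ours):

```python
import math

def phi_lam(a, b, lam):
    # Kanzow-Kleinmichel NCP-function, lam in (0, 4)
    return math.sqrt((a - b) ** 2 + lam * a * b) - a - b

def phi_fb(a, b):
    # Fischer-Burmeister NCP-function
    return math.sqrt(a * a + b * b) - a - b

# lam = 2 recovers FB, since (a - b)^2 + 2ab = a^2 + b^2
for a, b in [(0.3, 1.2), (2.0, -1.0), (-0.7, 0.4)]:
    assert abs(phi_lam(a, b, 2.0) - phi_fb(a, b)) < 1e-12

# phi_lam vanishes exactly on the complementarity set (a >= 0, b >= 0, ab = 0)
for lam in (0.5, 1.0, 3.0):
    assert abs(phi_lam(0.7, 0.0, lam)) < 1e-12
    assert abs(phi_lam(0.0, 1.3, lam)) < 1e-12
    assert phi_lam(0.5, 0.5, lam) != 0.0
```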

By using ϕ λ defined by (1), the NCP is equivalent to a system of nonsmooth equations

$\Phi_\lambda(x) = \big(\phi_\lambda(x_1, F_1(x)), \ldots, \phi_\lambda(x_n, F_n(x))\big)^T = 0.$

Let $\theta_\lambda(x) = \frac{1}{2}\|\Phi_\lambda(x)\|^2$. Then solving NCP(F) is equivalent to solving the unconstrained minimization problem $\min_{x\in\mathbb{R}^n} \theta_\lambda(x)$ with optimal value 0.
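The reformulation can be sketched as follows; the two-dimensional map $F$ and its solution are a toy example of ours, not one of the paper's test problems:

```python
import math

def phi_lam(a, b, lam):
    return math.sqrt((a - b) ** 2 + lam * a * b) - a - b

def Phi(x, F, lam):
    # componentwise application of phi_lam to the pairs (x_i, F_i(x))
    return [phi_lam(xi, fi, lam) for xi, fi in zip(x, F(x))]

def theta(x, F, lam):
    # merit function: half the squared Euclidean norm of Phi_lam(x)
    return 0.5 * sum(v * v for v in Phi(x, F, lam))

# toy example (ours): F(x) = (x1 - 1, x2 + 1); the NCP solution is x* = (1, 0)
F = lambda x: [x[0] - 1.0, x[1] + 1.0]
assert theta([1.0, 0.0], F, 1.5) < 1e-24
assert theta([0.5, 0.2], F, 1.5) > 0.0
```

The merit function is zero exactly at solutions of NCP(F), so driving $\theta_\lambda$ to zero solves the complementarity problem.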

Kanzow and Kleinmichel [6] studied the properties of $\Phi_\lambda$ and $\theta_\lambda$ and proposed a corresponding semismooth Newton method. Their method first attempts the Newton direction; if the Newton equation is unsolvable or the Newton direction is not a direction of sufficient decrease for $\theta_\lambda$, it switches to the steepest descent direction. In this paper, we propose a Newton-type method for the $P_0$-NCP(F) in which, at each iteration, we construct a nonsingular approximation of $\partial\Phi_\lambda(x)$ (the Clarke subdifferential of $\Phi_\lambda$ at $x$, defined in the next section), so that the direction-finding problem reduces to solving a system of perturbed Newton equations. We show that the proposed method is globally convergent without any additional assumption. The method is similar to the one discussed by Yamashita and Fukushima [8], where the NCP-function $\phi_{FB}$ was used; since $\phi_{FB}$ is a special case of $\phi_\lambda$, the proposed method applies more widely. However, it is difficult to establish the locally fast convergence of the proposed method. In order to investigate the locally fast convergence of this class of methods, we revise the proposed method and show that the modified method is globally and locally superlinearly convergent. Preliminary numerical results show the effectiveness of the modified method.

2 Preliminaries

In this section, we recall some basic concepts and known results.

Definition 2 $F: \mathbb{R}^n \to \mathbb{R}^n$ is called a $P_0$-function if

$\max_{1\le i\le n,\ x_i \ne y_i} (x_i - y_i)\big(F_i(x) - F_i(y)\big) \ge 0, \quad \forall x, y \in \mathbb{R}^n,\ x \ne y.$

Definition 3 A matrix $M \in \mathbb{R}^{n\times n}$ is a $P_0$-matrix if each of its principal minors is nonnegative.

It is known that the Jacobian of every continuously differentiable $P_0$-function is a $P_0$-matrix. The following theorem will play an important role in our analysis. Notice that, for a vector $a$, $D_a$ denotes the diagonal matrix whose $i$th diagonal element is $a_i$.

Theorem 4 (see [8])

Let $M$ be a $P_0$-matrix and let $D_a$ and $D_b$ be negative definite diagonal matrices. Then $D_a + D_b M$ is nonsingular.
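Theorem 4 can be illustrated numerically on a small example (the $2\times 2$ P-matrix $M$ and the diagonal values below are our choices):

```python
# Illustration of Theorem 4 (example data ours): M = [[2, -1], [-1, 2]]
# is a P-matrix (all principal minors positive), hence also a P0-matrix.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

M = [[2.0, -1.0], [-1.0, 2.0]]
for da in (-0.1, -1.0, -5.0):
    for db in (-0.2, -2.0, -7.0):
        # G = D_a + D_b M with D_a = da * I, D_b = db * I, both negative definite
        G = [[da + db * M[0][0], db * M[0][1]],
             [db * M[1][0], da + db * M[1][1]]]
        assert abs(det2(G)) > 1e-12
```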

Let $\Phi: \mathbb{R}^n \to \mathbb{R}^n$ be locally Lipschitz continuous. By Rademacher's theorem, $\Phi$ is differentiable almost everywhere.

Definition 5 Let $D_\Phi$ denote the set $\{x \in \mathbb{R}^n \mid \Phi \text{ is differentiable at } x\}$. The B-subdifferential of $\Phi$ at $x$ is defined as

$\partial_B \Phi(x) = \Big\{ V \in \mathbb{R}^{n\times n} \,\Big|\, V = \lim_{x^k \to x,\ x^k \in D_\Phi} \Phi'(x^k) \Big\}.$

The Clarke subdifferential of Φ at x is defined as

$\partial \Phi(x) = \operatorname{co}\, \partial_B \Phi(x),$

where co denotes the convex hull of a set.

By the definition of $\Phi_\lambda$, we know that $\Phi_\lambda$ is not differentiable at $x$ if $x_i = 0 = F_i(x)$ for some $i$. However, since $\Phi_\lambda$ is locally Lipschitz continuous [[6], Lemma 2.1], $\partial_B \Phi_\lambda(x)$ is nonempty at every $x \in \mathbb{R}^n$. But how can the set $\partial_B \Phi_\lambda(x)$ be specified exactly at points $x$ where $\Phi_\lambda'(x)$ does not exist?

To address this question, we construct two mappings $\tilde H$ and $\hat H$ which approximate $\partial_B \Phi_\lambda$. For a set $X$, we denote the power set of $X$ by $P(X)$.

Define the mapping $\tilde H: \mathbb{R}^n \to P(\mathbb{R}^{n\times n})$ as

$\tilde H(x) = \big\{\tilde H \in \mathbb{R}^{n\times n} \,\big|\, \tilde H = D_{\tilde a} + D_{\tilde b} F'(x),\ (\tilde a, \tilde b) \in \tilde\Omega(x)\big\},$

where $\tilde\Omega: \mathbb{R}^n \to P(\mathbb{R}^{2n})$ is given by

$\tilde\Omega(x) = \big\{(\tilde a, \tilde b) \in \mathbb{R}^{2n} \,\big|\, (\tilde a_i, \tilde b_i) \in \tilde\Omega_i(x),\ i = 1, 2, \ldots, n\big\}$

with

$\tilde\Omega_i(x) = \begin{cases} \big\{(\tilde a_i, \tilde b_i) \in \mathbb{R}^2 \mid (\tilde a_i + 1)^2 + (\tilde b_i + 1)^2 \le C_\lambda\big\}, & \text{if } x_i = 0 = F_i(x), \\ \big\{(\tilde a_i, \tilde b_i) \in \mathbb{R}^2 \mid \tilde a_i = \hat a_i,\ \tilde b_i = \hat b_i\big\}, & \text{otherwise}. \end{cases}$
(2)

Here, $C_\lambda$ denotes the constant $\frac{2\sqrt{\lambda(4-\lambda)}}{8}$, and

$\hat a_i = \frac{2(x_i - F_i(x)) + \lambda F_i(x)}{2\sqrt{(x_i - F_i(x))^2 + \lambda x_i F_i(x)}} - 1, \qquad \hat b_i = \frac{-2(x_i - F_i(x)) + \lambda x_i}{2\sqrt{(x_i - F_i(x))^2 + \lambda x_i F_i(x)}} - 1.$
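The expressions for $\hat a_i$ and $\hat b_i$ are the two partial derivatives of $\phi_\lambda$ evaluated at $(x_i, F_i(x))$; with the sign conventions as reconstructed above, this can be checked against finite differences (a hedged sketch; the function names are ours):

```python
import math

def phi_lam(a, b, lam):
    return math.sqrt((a - b) ** 2 + lam * a * b) - a - b

def a_hat(a, b, lam):
    # reconstructed formula for \hat a_i, i.e. d(phi_lam)/da at (a, b)
    g = math.sqrt((a - b) ** 2 + lam * a * b)
    return (2 * (a - b) + lam * b) / (2 * g) - 1.0

def b_hat(a, b, lam):
    # reconstructed formula for \hat b_i, i.e. d(phi_lam)/db at (a, b)
    g = math.sqrt((a - b) ** 2 + lam * a * b)
    return (-2 * (a - b) + lam * a) / (2 * g) - 1.0

# central finite differences away from the kink at (a, b) = (0, 0)
lam, a, b, h = 1.0, 0.8, 0.3, 1e-6
fd_a = (phi_lam(a + h, b, lam) - phi_lam(a - h, b, lam)) / (2 * h)
fd_b = (phi_lam(a, b + h, lam) - phi_lam(a, b - h, lam)) / (2 * h)
assert abs(a_hat(a, b, lam) - fd_a) < 1e-6
assert abs(b_hat(a, b, lam) - fd_b) < 1e-6
```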

In the following, we define $\hat H$ similarly to $\tilde H$; its values are subsets of the corresponding values of $\tilde H$.

The mapping $\hat H: \mathbb{R}^n \to P(\mathbb{R}^{n\times n})$ is defined by

$\hat H(x) = \big\{\hat H \in \mathbb{R}^{n\times n} \,\big|\, \hat H = D_{\hat a} + D_{\hat b} F'(x),\ (\hat a, \hat b) \in \hat\Omega(x)\big\},$

where $\hat\Omega: \mathbb{R}^n \to P(\mathbb{R}^{2n})$ is defined by

$\hat\Omega(x) = \big\{(\hat a, \hat b) \in \mathbb{R}^{2n} \,\big|\, (\hat a, \hat b) = (g(x,z), h(x,z)),\ z \in Z(x)\big\}.$

Here $Z(x) = \{z \in \mathbb{R}^n \mid z_i \ne 0 \text{ if } i \in \beta\}$, and $\beta$ denotes the set $\{i \mid x_i = 0 = F_i(x)\}$. The components of the vector $g(x,z)$ are given by

$g_i(x,z) = \begin{cases} \dfrac{2(z_i - \nabla F_i(x)^T z) + \lambda \nabla F_i(x)^T z}{2\sqrt{(z_i - \nabla F_i(x)^T z)^2 + \lambda z_i \nabla F_i(x)^T z}} - 1, & \text{if } x_i = 0 = F_i(x), \\[2mm] \dfrac{2(x_i - F_i(x)) + \lambda F_i(x)}{2\sqrt{(x_i - F_i(x))^2 + \lambda x_i F_i(x)}} - 1, & \text{otherwise}; \end{cases}$

and the components of the vector $h(x,z)$ are given by

$h_i(x,z) = \begin{cases} \dfrac{-2(z_i - \nabla F_i(x)^T z) + \lambda z_i}{2\sqrt{(z_i - \nabla F_i(x)^T z)^2 + \lambda z_i \nabla F_i(x)^T z}} - 1, & \text{if } x_i = 0 = F_i(x), \\[2mm] \dfrac{-2(x_i - F_i(x)) + \lambda x_i}{2\sqrt{(x_i - F_i(x))^2 + \lambda x_i F_i(x)}} - 1, & \text{otherwise}. \end{cases}$

Remark From (2), we find that, for every $x \in \mathbb{R}^n$, each $(\tilde a, \tilde b) \in \tilde\Omega(x)$ satisfies $-\sqrt{C_\lambda} - 1 \le \tilde a_i, \tilde b_i \le 0$ (see [[6], Proposition 2.6]), and $\tilde a_i$, $\tilde b_i$ do not vanish simultaneously. The same holds for the pairs $(\hat a, \hat b)$ defining the elements of $\hat H$.

The mappings H ˜ and H ˆ have the following property which will play an important role in our analysis.

Theorem 6 For an arbitrary $x \in \mathbb{R}^n$, we have $\hat H(x) \subseteq \partial_B \Phi_\lambda(x) \subseteq \tilde H(x)$.

Proof The inclusion $\partial_B \Phi_\lambda(x) \subseteq \tilde H(x)$ was shown in [[6], Proposition 2.5]. Hence, we prove $\hat H(x) \subseteq \partial_B \Phi_\lambda(x)$ in the following.

For an arbitrary $\hat H \in \hat H(x)$, we shall build a sequence of points $\{y^k\}$ such that $\Phi_\lambda$ is differentiable at every $y^k$ and $\Phi_\lambda'(y^k)$ tends to $\hat H$; the theorem then follows from the definition of the B-subdifferential.

Let $y^k = x + \varepsilon_k z$, where $z \in Z(x)$ and $\{\varepsilon_k\}$ is a sequence of positive numbers converging to 0. If $i \notin \beta$, then either $x_i \ne 0$ or $F_i(x) \ne 0$; moreover, $z_i \ne 0$ for all $i \in \beta$.

We can see, by continuity, that if $\varepsilon_k$ is small enough then, for each $i$, either $y_i^k \ne 0$ or $F_i(y^k) \ne 0$, so $\Phi_\lambda$ is differentiable at $y^k$. If $i \notin \beta$, by continuity, the $i$th row of $\Phi_\lambda'(y^k)$ tends to the $i$th row of $\hat H$. So we need only consider the case $i \in \beta$.

From [[6], Proposition 2.5], we know that the $i$th row of $\Phi_\lambda'(y^k)$ is

$\big(a_i(y^k) - 1\big)e_i^T + \big(b_i(y^k) - 1\big)\nabla F_i(y^k)^T,$
(3)

where

$a_i(y^k) = \frac{2(\varepsilon_k z_i - F_i(y^k)) + \lambda F_i(y^k)}{2\sqrt{(\varepsilon_k z_i - F_i(y^k))^2 + \lambda \varepsilon_k z_i F_i(y^k)}}, \qquad b_i(y^k) = \frac{-2(\varepsilon_k z_i - F_i(y^k)) + \lambda \varepsilon_k z_i}{2\sqrt{(\varepsilon_k z_i - F_i(y^k))^2 + \lambda \varepsilon_k z_i F_i(y^k)}}.$

By Taylor expansion, and using $F_i(x) = 0$ for $i \in \beta$, we have, for each $i \in \beta$,

$F_i(y^k) = F_i(x) + \varepsilon_k \nabla F_i(\xi^k)^T z = \varepsilon_k \nabla F_i(\xi^k)^T z, \quad \text{with } \xi^k \to x.$
(4)

Substituting (4) into (3) and passing to the limit, we find, by the continuity of $\nabla F$, that the rows of $\Phi_\lambda'(y^k)$ with $i \in \beta$ also tend to the corresponding rows of $\hat H$ (the common factor $\varepsilon_k$ cancels). Hence, $\Phi_\lambda'(y^k)$ tends to $\hat H$. □

In this paper, we present two algorithms. The first, presented in Section 3, uses matrices obtained by perturbing $\tilde H \in \tilde H(x)$; we establish its global convergence. In Section 4, we present a second algorithm, based on $\hat H \in \hat H(x)$, which is a restricted version of the first and can be superlinearly convergent.

3 Algorithm and global convergence

For a Newton-type method, the direction-finding problem is the equation $\tilde H d = -\Phi_\lambda(x^k)$ with $\tilde H \in \tilde H(x^k)$. However, $\tilde H$ is not necessarily nonsingular. In this section, we perturb $\tilde H$ to a nonsingular matrix $\tilde G$; a search direction can then be obtained by solving $\tilde G d = -\Phi_\lambda(x^k)$. We now construct $\tilde G$ as follows.

First, mappings $\Lambda_i: \mathbb{R}^{n+2} \to P(\mathbb{R}^2)$, $i = 1, 2, \ldots, n$, are defined by

$\Lambda_i(x, a_i, b_i) = \begin{cases} \big\{(\bar a_i, \bar b_i) \in \mathbb{R}^2 \mid \bar a_i = \sigma(\theta_\lambda(x))\, b_i,\ \bar b_i = 0\big\}, & \text{if } -\varepsilon < a_i \text{ and } b_i \le -\varepsilon, \\ \big\{(\bar a_i, \bar b_i) \in \mathbb{R}^2 \mid \bar a_i = \tau\,\sigma(\theta_\lambda(x))\, b_i,\ \bar b_i = (1-\tau)\,\sigma(\theta_\lambda(x))\, a_i,\ \tau \in [0,1]\big\}, & \text{if } a_i \le -\varepsilon \text{ and } b_i \le -\varepsilon, \\ \big\{(\bar a_i, \bar b_i) \in \mathbb{R}^2 \mid \bar a_i = 0,\ \bar b_i = \sigma(\theta_\lambda(x))\, a_i\big\}, & \text{if } a_i \le -\varepsilon \text{ and } -\varepsilon < b_i, \end{cases}$

where $\varepsilon \in (0, 1 - C_\lambda/2)$, and $\sigma: \mathbb{R}_+ \to \mathbb{R}_+$ is a nondecreasing continuous function such that $\sigma(0) = 0$ and $\sigma(t) > 0$ for all $t > 0$.

Because $\varepsilon \in (0, 1 - C_\lambda/2)$, it is clear that, for $(a, b) \in \tilde\Omega(x)$, the case $-\varepsilon < a_i$ and $-\varepsilon < b_i$ cannot occur.
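A concrete selection from $\Lambda_i$ can be sketched as follows, using the $\sigma$ and $\varepsilon$ from Section 5; fixing $\tau = 0.5$ in the middle case is our arbitrary choice. The point of the construction is that the perturbed pair $(a_i + \bar a_i,\ b_i + \bar b_i)$ is strictly negative whenever $\theta_\lambda(x) > 0$:

```python
def sigma(t):
    # the sigma used in Section 5: nondecreasing, sigma(0) = 0
    return 0.1 * min(1.0, t)

def perturbation(a_i, b_i, theta_val, eps=0.05, tau=0.5):
    # One selection (abar_i, bbar_i) from Lambda_i(x, a_i, b_i); fixing
    # tau = 0.5 in the middle case is our arbitrary choice. Assumes
    # a_i, b_i <= 0 and that a_i, b_i are not both greater than -eps.
    s = sigma(theta_val)
    if a_i > -eps and b_i <= -eps:
        return (s * b_i, 0.0)
    if a_i <= -eps and b_i <= -eps:
        return (tau * s * b_i, (1.0 - tau) * s * a_i)
    return (0.0, s * a_i)

# the perturbed pair (p_i, q_i) = (a_i + abar_i, b_i + bbar_i) is then
# strictly negative whenever theta_lambda(x) > 0
for a_i, b_i in [(-0.01, -1.0), (-0.8, -0.6), (-1.2, -0.02)]:
    abar, bbar = perturbation(a_i, b_i, theta_val=2.0)
    assert a_i + abar < 0.0 and b_i + bbar < 0.0
```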

In the following, we construct G ˜ as

$\tilde G = D_{\tilde p} + D_{\tilde q} F'(x),$

where p ˜ and q ˜ are vectors such that

$(\tilde p_i, \tilde q_i) = (\tilde a_i + \bar a_i,\ \tilde b_i + \bar b_i), \quad i = 1, 2, \ldots, n,$
(5)

with $(\tilde a, \tilde b) \in \tilde\Omega(x)$ and $(\bar a_i, \bar b_i) \in \Lambda_i(x, \tilde a_i, \tilde b_i)$, $i = 1, 2, \ldots, n$.

If $\theta_\lambda(x) > 0$, the definition of $\Lambda_i$ and (5) imply that both $D_{\tilde p}$ and $D_{\tilde q}$ are negative definite matrices. Furthermore, we define $\tilde G: \mathbb{R}^n \to P(\mathbb{R}^{n\times n})$ as follows:

$\tilde G(x) = \big\{\tilde G \in \mathbb{R}^{n\times n} \,\big|\, \tilde G = D_{\tilde p} + D_{\tilde q} F'(x),\ (\tilde p, \tilde q) \text{ defined by (5) with } (\tilde a, \tilde b) \in \tilde\Omega(x) \text{ and } (\bar a_i, \bar b_i) \in \Lambda_i(x, \tilde a_i, \tilde b_i) \text{ for } i = 1, 2, \ldots, n\big\}.$

It is obvious that $\tilde G = D_{\tilde p} + D_{\tilde q} F'(x)$ and $\tilde H = D_{\tilde a} + D_{\tilde b} F'(x)$ are closely related. Moreover, $\tilde G$ is nonsingular under suitable conditions.

Theorem 7 If x is not a solution of NCP(F), i.e., θ λ (x)>0, then every G ˜ G ˜ (x) is nonsingular.

Proof Suppose $\theta_\lambda(x) > 0$ and let $\tilde G \in \tilde G(x)$. It follows from the definition of $\tilde G(x)$ that $D_{\tilde p}$ and $D_{\tilde q}$ are negative definite matrices.

Since $F$ is a $P_0$-function, its Jacobian $F'(x)$ is a $P_0$-matrix. Hence, by Theorem 4, $\tilde G$ is nonsingular. □

By the mapping G ˜ , we define Δ ˜ : R n P( R n ) as

$\tilde\Delta(x) = \big\{d \in \mathbb{R}^n \,\big|\, \tilde G d = -\Phi_\lambda(x),\ \tilde G \in \tilde G(x)\big\}.$

It is easy to see that Δ ˜ (x) is nonempty for every x such that θ λ (x)>0. Now we give the first algorithm.

Algorithm 1 Step 1. Initialization: choose $\lambda \in (0,4)$, $x^0 \in \mathbb{R}^n$, $\rho \in (0, 0.5)$, $\beta \in (0,1)$, and set $k := 0$.

Step 2. Termination criterion: if $\theta_\lambda(x^k) = 0$, stop. Otherwise, go to Step 3.

Step 3. Search direction calculation: find a vector d k Δ ˜ ( x k ).

Step 4. Line search: let m be the smallest nonnegative integer such that

$\theta_\lambda(x^k + \beta^m d^k) \le \theta_\lambda(x^k) + \beta^m \rho\, \nabla\theta_\lambda(x^k)^T d^k.$

Step 5. Update: set $x^{k+1} := x^k + t_k d^k$, where $t_k = \beta^m$; set $k := k+1$ and go to Step 2.
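Algorithm 1 can be sketched end-to-end on a toy problem. The sketch below (ours, not the authors' MATLAB code) omits the degenerate-index and perturbation machinery ($\tilde\Omega$, $\Lambda_i$) and simply uses the nondegenerate formulas for $\hat a_i$, $\hat b_i$, which suffices along this particular trajectory; $F(x) = Mx + q$ with a $2\times 2$ P-matrix $M$ is our own test data:

```python
import math

lam, rho, beta = 1.0, 1e-4, 0.5

# Toy data (ours): F(x) = M x + q with M a P-matrix, so F is a P0-function.
# The NCP solution is x* = (0.5, 0.5), an interior point where F(x*) = 0.
M = [[3.0, -1.0], [-1.0, 3.0]]
q = [-1.0, -1.0]

def F(x):
    return [M[0][0] * x[0] + M[0][1] * x[1] + q[0],
            M[1][0] * x[0] + M[1][1] * x[1] + q[1]]

def phi(a, b):
    # Kanzow-Kleinmichel NCP-function
    return math.sqrt((a - b) ** 2 + lam * a * b) - a - b

def theta(x):
    Fx = F(x)
    return 0.5 * sum(phi(x[i], Fx[i]) ** 2 for i in range(2))

def solve2(A, r):
    # Cramer's rule for a 2x2 linear system A d = r
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(r[0] * A[1][1] - A[0][1] * r[1]) / det,
            (A[0][0] * r[1] - r[0] * A[1][0]) / det]

def newton_step(x):
    # Row i of H is a_i e_i^T + b_i * (row i of F'(x)), using the
    # nondegenerate formulas for a_i, b_i; the degenerate branch and
    # the Lambda_i perturbation of the paper are omitted in this sketch.
    Fx = F(x)
    Phi = [phi(x[i], Fx[i]) for i in range(2)]
    H = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        a, b = x[i], Fx[i]
        g = math.sqrt((a - b) ** 2 + lam * a * b)
        ai = (2 * (a - b) + lam * b) / (2 * g) - 1.0
        bi = (-2 * (a - b) + lam * a) / (2 * g) - 1.0
        for j in range(2):
            H[i][j] = (ai if i == j else 0.0) + bi * M[i][j]
    d = solve2(H, [-Phi[0], -Phi[1]])
    return Phi, d

x = [2.0, 0.5]
for _ in range(50):
    if theta(x) <= 1e-16:
        break
    Phi, d = newton_step(x)
    # along d, grad(theta)^T d = Phi^T H d = -||Phi||^2 < 0 (descent)
    gTd = -sum(p * p for p in Phi)
    t = 1.0
    while theta([x[0] + t * d[0], x[1] + t * d[1]]) > theta(x) + rho * t * gTd:
        t *= beta
    x = [x[0] + t * d[0], x[1] + t * d[1]]

assert abs(x[0] - 0.5) < 1e-6 and abs(x[1] - 0.5) < 1e-6
```

On this example the full Newton step is accepted by the Armijo test at every iteration and the iterates converge rapidly to the interior solution.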

It is obvious that if $\theta_\lambda(x^k) = 0$, then $x^k$ is a solution of NCP(F). Next, we prove the global convergence of Algorithm 1. First, we show that every $d \in \tilde\Delta(x)$ is a descent direction for $\theta_\lambda$ at $x$.

Lemma 8 (see [[8], Lemma 3.2])

If $x$ is not a solution of NCP(F), i.e., $\theta_\lambda(x) > 0$, then every $d \in \tilde\Delta(x)$ satisfies the descent condition for $\theta_\lambda$, i.e., $\nabla\theta_\lambda(x)^T d < 0$.

Theorem 9 Every accumulation point of a sequence { x k } generated by Algorithm  1 is a solution of NCP(F).

Proof Owing to Step 4, $\{\theta_\lambda(x^k)\}$ is monotonically decreasing and nonnegative, so it converges to some $\theta_\lambda^* \ge 0$. Suppose, to the contrary, that $\theta_\lambda^* > 0$. Let $x^*$ be an accumulation point of $\{x^k\}$ and let $\{x^k\}_{k\in K}$ be a subsequence converging to $x^*$.

Since $\tilde\Delta$ is uniformly compact near $x^*$ and closed at $x^*$ (see [[8], Lemma 3.4]), we may assume, without loss of generality, that $\lim_{k\in K,\, k\to\infty} d^k = d^* \in \tilde\Delta(x^*)$. By Lemma 8, we obtain a contradiction if we can prove $\nabla\theta_\lambda(x^*)^T d^* = 0$. This can be shown by considering the following two cases:

• Suppose that $\inf\{t_k\} \ge t > 0$. Then, by the line search and the convergence of $\{\theta_\lambda(x^k)\}$,

$0 \leftarrow \theta_\lambda(x^k + t_k d^k) - \theta_\lambda(x^k) \le t_k \rho\, \nabla\theta_\lambda(x^k)^T d^k \le 0,$

so that $\nabla\theta_\lambda(x^*)^T d^* = 0$ is satisfied.

• Suppose that $\inf\{t_k\} = 0$. In this case, we assume $\lim_{k\in K,\, k\to\infty} t_k = 0$ without loss of generality. Since the trial step size $t_k/\beta$ failed the line search test, we have

$\frac{\theta_\lambda(x^k + (t_k/\beta)\, d^k) - \theta_\lambda(x^k)}{t_k/\beta} > \rho\, \nabla\theta_\lambda(x^k)^T d^k;$

taking the limit yields $\nabla\theta_\lambda(x^*)^T d^* \ge \rho\, \nabla\theta_\lambda(x^*)^T d^*$. Since $\rho \in (0, 0.5)$, we have $\nabla\theta_\lambda(x^*)^T d^* \ge 0$. Hence, $\nabla\theta_\lambda(x^*)^T d^* = 0$.

In both cases, $\nabla\theta_\lambda(x^*)^T d^* = 0$, contradicting Lemma 8. The proof is complete. □

4 Modified algorithm and fast convergence

In the preceding section, we established the global convergence of Algorithm 1, which determines a search direction based on $\tilde H$, a set containing the generalized Jacobian $\partial_B \Phi_\lambda(x)$. However, it is difficult to show the superlinear convergence of Algorithm 1. In the following, we modify the search direction in order to accelerate the convergence of the algorithm. By the definition of $\hat H$, an element $\hat H \in \hat H(x)$ is not necessarily nonsingular. Can $\hat H$ be perturbed in the same way as $\tilde H$? Next, we give a positive answer to this question.

Define G ˆ as

$\hat G = D_{\hat p} + D_{\hat q} F'(x),$

where p ˆ and q ˆ are vectors such that

$(\hat p_i, \hat q_i) = (\hat a_i + \bar a_i,\ \hat b_i + \bar b_i), \quad i = 1, 2, \ldots, n,$
(6)

with $(\hat a, \hat b) \in \hat\Omega(x)$ and $(\bar a_i, \bar b_i) \in \Lambda_i(x, \hat a_i, \hat b_i)$, $i = 1, 2, \ldots, n$.

If $\theta_\lambda(x) > 0$, the definition of $\Lambda_i$ and (6) imply that both $D_{\hat p}$ and $D_{\hat q}$ are negative definite matrices.

The mapping $\hat G: \mathbb{R}^n \to P(\mathbb{R}^{n\times n})$ is defined by

$\hat G(x) = \big\{\hat G \in \mathbb{R}^{n\times n} \,\big|\, \hat G = D_{\hat p} + D_{\hat q} F'(x),\ (\hat p, \hat q) \text{ defined by (6) with } (\hat a, \hat b) \in \hat\Omega(x) \text{ and } (\bar a_i, \bar b_i) \in \Lambda_i(x, \hat a_i, \hat b_i) \text{ for } i = 1, 2, \ldots, n\big\}.$

From Theorem 6, $\hat H(x) \subseteq \tilde H(x)$; it follows immediately that $\hat G(x) \subseteq \tilde G(x)$. And from Theorem 7, every $\hat G \in \hat G(x)$ is nonsingular if $\theta_\lambda(x) > 0$.

Define Δ ˆ : R n P( R n ) as

$\hat\Delta(x) = \big\{d \in \mathbb{R}^n \,\big|\, \hat G d = -\Phi_\lambda(x),\ \hat G \in \hat G(x)\big\}.$

For any $x$, since $\hat G(x) \subseteq \tilde G(x)$, we obtain $\hat\Delta(x) \subseteq \tilde\Delta(x)$. Hence, $\hat\Delta(x)$ is nonempty for every $x$ such that $\theta_\lambda(x) > 0$. We now give the second algorithm; its only difference from Algorithm 1 is that the search direction is chosen from $\hat\Delta(x^k)$.

Algorithm 2 Step 1. Initialization: choose $\lambda \in (0,4)$, $x^0 \in \mathbb{R}^n$, $\rho \in (0, 0.5)$, $\beta \in (0,1)$, and set $k := 0$.

Step 2. Termination criterion: if $\theta_\lambda(x^k) = 0$, stop. Otherwise, go to Step 3.

Step 3. Search direction calculation: find a vector d k Δ ˆ ( x k ).

Step 4. Line search: let m be the smallest nonnegative integer such that

$\theta_\lambda(x^k + \beta^m d^k) \le \theta_\lambda(x^k) + \beta^m \rho\, \nabla\theta_\lambda(x^k)^T d^k.$

Step 5. Update: set $x^{k+1} := x^k + t_k d^k$, where $t_k = \beta^m$; set $k := k+1$ and go to Step 2.

Since $\hat\Delta(x^k) \subseteq \tilde\Delta(x^k)$ at each $x^k$, as mentioned above, the global convergence of Algorithm 2 follows directly from Theorem 9. We state the following theorem without proof.

Theorem 10 Every accumulation point of a sequence { x k } generated by Algorithm  2 is a solution of NCP(F).

In the following, we focus our attention on the superlinear convergence rate of Algorithm 2. To begin with, we assume that the sequence { x k } generated by Algorithm 2 has a unique limit point x .

Lemma 11 We have $\|x^k + d^k - x^*\| = o(\|x^k - x^*\|)$.

Proof For each $k$, we have

$\begin{aligned} \|x^k + d^k - x^*\| &= \|x^k - \hat G_k^{-1}\Phi_\lambda(x^k) - x^*\| \\ &= \big\|\hat G_k^{-1}\big(\Phi_\lambda(x^*) - \Phi_\lambda(x^k) + \hat H_k(x^k - x^*) + (\hat G_k - \hat H_k)(x^k - x^*)\big)\big\| \\ &\le \|\hat G_k^{-1}\|\,\big(\|\Phi_\lambda(x^*) - \Phi_\lambda(x^k) + \hat H_k(x^k - x^*)\| + \|\hat G_k - \hat H_k\|\,\|x^k - x^*\|\big), \end{aligned}$

where $\hat H_k \in \hat H(x^k)$ is the matrix corresponding to $\hat G_k$. Since $\Phi_\lambda$ is semismooth [[6], Lemma 2.2] and $\hat H(x^k) \subseteq \partial_B \Phi_\lambda(x^k)$ for each $k$ by Theorem 6, we have

$\big\|\Phi_\lambda(x^*) - \Phi_\lambda(x^k) + \hat H_k(x^k - x^*)\big\| = o\big(\|x^k - x^*\|\big)$

(see the proof of [[9], Theorem 3.1]). Moreover, by the definition of Λ i and (6), we have

$\|\hat G_k - \hat H_k\| = O\big(\sigma(\theta_\lambda(x^k))\big).$

Consequently, it follows that

$\|x^k + d^k - x^*\| \le \|\hat G_k^{-1}\|\,\big(o(\|x^k - x^*\|) + O\big(\sigma(\theta_\lambda(x^k))\big)\,\|x^k - x^*\|\big).$

Since $\sigma(\theta_\lambda(x^k)) \to 0$ and $\{\|\hat G_k^{-1}\|\}$ is bounded (see the proof of [[8], Lemma 4.2]), we obtain the desired result. □

Now, we prove the superlinear convergence of Algorithm 2.

Theorem 12 Algorithm  2 has a superlinear rate of convergence.

Proof We have $x^{k+1} = x^k + d^k$ for all $k$ sufficiently large (see the proof of [[8], Lemma 4.6]). It then follows from Lemma 11 that

$\lim_{k\to\infty} \frac{\|x^{k+1} - x^*\|}{\|x^k - x^*\|} = 0.$

The proof is complete. □

5 Numerical results

In this section, we do some preliminary numerical experiments to test Algorithm 2 and compare its performance with that of the algorithms proposed in Chen and Pan [2] and Sun and Zeng [10].

First, we set $\beta = 0.5$, $\rho = 10^{-4}$, $\varepsilon = 0.05$, and $\sigma(t) = 0.1\min\{1, t\}$.

For $z \in Z(x^k)$, we define

$z_i = \begin{cases} 1, & \text{if } x_i^k = F_i(x^k) = 0, \\ 0, & \text{otherwise}, \end{cases} \qquad i = 1, 2, \ldots, n.$
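These choices can be written down directly (a sketch; the exact-zero test for the degenerate set is taken literally from the definition above):

```python
def sigma(t):
    # nondecreasing and continuous, with sigma(0) = 0 and sigma(t) > 0 for t > 0
    return 0.1 * min(1.0, t)

def z_vector(x, Fx):
    # z_i = 1 on the degenerate set beta = {i : x_i = 0 = F_i(x)}, else z_i = 0
    return [1.0 if (xi == 0.0 and fi == 0.0) else 0.0 for xi, fi in zip(x, Fx)]

assert sigma(0.0) == 0.0
assert abs(sigma(0.5) - 0.05) < 1e-12 and abs(sigma(3.0) - 0.1) < 1e-12
assert z_vector([0.0, 1.0], [0.0, 0.0]) == [1.0, 0.0]
```

In floating-point practice the degeneracy test $x_i = 0 = F_i(x)$ would typically be replaced by a small tolerance; the exact comparison here mirrors the paper's definition.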

The stopping criterion for Algorithm 2 is $\theta_\lambda(x^k) \le 10^{-8}$. The programs are coded in MATLAB and run on a personal computer with a 2.1 GHz CPU.

The meanings of the columns in the tables are as follows: iter denotes the total number of iterations, and resi denotes the final value of $\theta_\lambda(x)$.

Problem 1 Let F(x)=Ax+q, where

$A = \begin{bmatrix} 3 & -1 & & \\ -1 & 3 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 3 \end{bmatrix}, \qquad q = (-1, \ldots, -1)^T.$

The corresponding complementarity problem has a unique solution. Table 1 lists the test results for Problem 1 with different $n$, $\lambda$ and the initial points $a = (5, \ldots, 5)^T$, $b = (0, \ldots, 0)^T$, $c = (1, \ldots, 1)^T$, $d = (8, \ldots, 8)^T$.
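Problem 1's data can be assembled as follows. The signs below are our reading of the displayed matrix (tridiagonal with 3 on the diagonal and $-1$ off it, $q = -e$), which the source rendering leaves ambiguous; under this reading $A$ is strictly diagonally dominant with positive diagonal, hence a P-matrix, which is consistent with the uniqueness claim:

```python
def build_problem1(n):
    # Our reading of the displayed data: A tridiagonal with 3 on the
    # diagonal and -1 on both off-diagonals, q = (-1, ..., -1)^T.
    # (The signs are an assumption; the source display is ambiguous.)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 3.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i + 1 < n:
            A[i][i + 1] = -1.0
    q = [-1.0] * n
    return A, q

# With this reading A is strictly diagonally dominant with positive
# diagonal entries, hence a P-matrix, so LCP(A, q) has a unique solution.
A, q = build_problem1(5)
for i in range(5):
    assert A[i][i] > sum(abs(A[i][j]) for j in range(5) if j != i)
```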

Table 1 Test results for Problem 1

From Table 1, we see that the test results for $\lambda \in (0,1)$ are better than those for the other cases; especially good numerical results are obtained when $\lambda$ is close to 0. We then compare the test results with the method of Chen and Pan [2], where, for convenience, we set $p = 2$, $\varepsilon = 1.0\text{e}{-8}$, $\sigma = 1.0\text{e}{-10}$, $\beta = 0.2$. Table 2 lists the test results for [2].

Table 2 Test results for Chen and Pan [2] on Problem 1

Tables 1 and 2 indicate that Algorithm 2 performed much better on Problem 1 than the method of Chen and Pan [2].

Problem 2 Free boundary problems can also be solved by the method presented here. The following problem arises from the discretization of a free boundary problem (see [10]). Let $\Omega = (0,1) \times (0,1)$ and let the function $g$ satisfy $g(x, 0) = x(1-x)$ and $g(x, y) = 0$ on $x = 0, 1$ or $y = 1$.

Consider the following problem: find $u$ such that

$\begin{cases} u \ge 0 & \text{in } \Omega, \\ -\Delta u + f(u,x,y) - 8(y - 0.5) \ge 0 & \text{in } \Omega, \\ u\,\big({-\Delta} u + f(u,x,y) - 8(y - 0.5)\big) = 0 & \text{in } \Omega, \\ u = g & \text{on } \partial\Omega, \end{cases}$

where $f(u,x,y)$ is a continuously differentiable $P_0$-function. We discretize the problem by the five-point difference scheme with mesh step $h$ and obtain the following complementarity problem: find $x \in \mathbb{R}^n$ such that

$x \ge 0, \qquad Ax + \Psi(x) \ge 0, \qquad x^T\big(Ax + \Psi(x)\big) = 0.$

Set the initial point as $x^0 = (0, \ldots, 0)^T$. Table 3 lists the test results with different functions $f$, $\lambda$, and $h$.

Table 3 Test results for Problem 2

From Table 3, we have the following observations.

• Our test results improve as $\lambda$ decreases; it is apparent that for $\lambda = 2$ the results are not good enough. That is to say, Algorithm 2 with the NCP-function $\phi_\lambda$ ($0 < \lambda < 2$) performs better than the method discussed in [8], where the Fischer-Burmeister function was used.

• Whether the function $f(u,x,y)$ is linear or nonlinear, the test results are good; they are especially good when $\lambda$ is close to 0.

We compare the test results with Sun and Zeng [10], where we set $\beta = 0.5$, $c = 0.5^4$. Table 4 lists the test results for [10] with different functions $f$ and $h$.

Table 4 Test results for Sun and Zeng [10] on Problem 2

Tables 3 and 4 indicate that Algorithm 2 performed as well as Sun and Zeng [10] did on Problem 2.

Problem 3 We ran Algorithm 2 on some test problems from MCPLIB [11] with all available starting points. The results are reported in Table 5, with times given in seconds.

Table 5 Test results for MCPLIB problems

The above examples indicate that the results are better when $\lambda$ is close to 0. A reasonable interpretation is that the values of $g_i(x,z)$ and $h_i(x,z)$ become smaller as $\lambda$ increases, which causes some difficulty for Algorithm 2; accordingly, the performance of Algorithm 2 deteriorates as $\lambda$ increases. As $\lambda \to 0$, the equation $\Phi_\lambda(x) = 0$ formally reduces to the nonsmooth equation $\min\{x, F(x)\} = 0$; however, $\lambda = 0$ itself is excluded, since $\lambda$ must lie in $(0,4)$, so this limiting reformulation cannot be used with the present method.

6 Concluding remarks

In this paper, we have studied a class of one-parametric NCP-functions $\phi_\lambda(\cdot,\cdot)$, which includes the well-known Fischer-Burmeister function as a special case, and proposed modified Newton-type algorithms for solving $P_0$ complementarity problems.

Numerical results for the test problems show that this method is promising for $\lambda \in (0,4)$. Moreover, our numerical results indicate that the performance of the modified Newton-type method improves as $\lambda$ decreases, which is a new and noteworthy numerical observation. We believe that Algorithm 2 can effectively solve further practical problems whenever they can be reformulated as an NCP(F). We leave this as a future research topic.

References

  1. Chen J-S: On some NCP-functions based on the generalized Fischer-Burmeister function. Asia-Pac. J. Oper. Res. 2007, 24: 401–420. doi:10.1142/S0217595907001292

  2. Chen J-S, Pan S: A family of NCP functions and a descent method for the nonlinear complementarity problem. Comput. Optim. Appl. 2008, 40: 389–404. doi:10.1007/s10589-007-9086-0

  3. Fischer A: A special Newton-type optimization method. Optimization 1992, 24: 269–284. doi:10.1080/02331939208843795

  4. Hu SL, Huang ZH, Chen J-S: Properties of a family of generalized NCP-functions and a derivative free algorithm for complementarity problems. J. Comput. Appl. Math. 2009, 230(1): 69–82. doi:10.1016/j.cam.2008.10.056

  5. Hu SL, Huang ZH, Lu N: Smoothness of a class of merit functions for the second-order cone complementarity problem. Pac. J. Optim. 2010, 6(3): 551–571.

  6. Kanzow C, Kleinmichel H: A new class of semismooth Newton-type methods for nonlinear complementarity problems. Comput. Optim. Appl. 1998, 11: 227–251. doi:10.1023/A:1026424918464

  7. Lu LY, Huang ZH, Hu SL: Properties of a family of merit functions and a derivative-free method for the NCP. Appl. Math. J. Chin. Univ. Ser. A 2010, 25(4): 379–390. doi:10.1007/s11766-010-2179-z

  8. Yamashita N, Fukushima M: Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems. Math. Program. 1997, 76: 469–491.

  9. Qi L: Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 1993, 18: 227–244. doi:10.1287/moor.18.1.227

  10. Sun Z, Zeng JP: A monotone semismooth Newton type method for a class of complementarity problems. J. Comput. Appl. Math. 2011, 235: 1261–1274. doi:10.1016/j.cam.2010.08.012

  11. Billups SC, Dirkse SP, Soares MC: A comparison of algorithms for large scale mixed complementarity problems. Comput. Optim. Appl. 1997, 7: 3–25. doi:10.1023/A:1008632215341


Acknowledgements

This work was supported by the NSFC (50975200).

Corresponding author

Correspondence to Zijun Deng.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

WX participated in the design of the algorithm. ZD performed the numerical experiment and statistical analysis. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Xie, W., Deng, Z. Modified Newton-type methods for the NCP by using a class of one-parametric NCP-functions. J Inequal Appl 2012, 286 (2012). https://doi.org/10.1186/1029-242X-2012-286


Keywords

  • nonlinear complementarity problem
  • NCP-function
  • generalized Newton method
  • global convergence
  • superlinear convergence