A smoothing and regularization predictor-corrector method for nonlinear inequalities

Abstract

We approximate a system of nonlinear inequalities by a family of parameterized smooth equations via a new smoothing function, and we present a new smoothing and regularization predictor-corrector algorithm. The global convergence and local superlinear convergence of the algorithm are established. In addition, the smoothing parameter μ and the regularization parameter ε in our algorithm are treated as independent variables. Preliminary numerical results show the efficiency of the algorithm.

MSC: 90C33, 90C30, 15A06.

1 Introduction

Consider the following system of nonlinear inequalities:

$$f(x) \le 0, \qquad (1.1)$$

where $f(x) = (f_1(x), f_2(x), \ldots, f_n(x))^{\top}$ and each $f_i: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function, $i = 1, 2, \ldots, n$. This problem finds applications in data analysis, set separation problems, computer-aided design problems and image reconstruction [1–3]. Among the various solution methods for inequality problems [4–10], smoothing-type methods have received much attention [8–10]; they first transform the problem into a system of nonsmooth equations, approximate it by a smooth equation, and then solve the latter by a smoothing Newton method. Since the derivative of the underlying mapping may be seriously ill-conditioned, which may prevent smoothing methods from converging to a solution of the problem, a perturbed regularization technique has been introduced to overcome this drawback [9, 11, 12]. In 2003, Huang et al. proposed a predictor-corrector smoothing Newton method for the nonlinear complementarity problem with a $P_0$ function based on the perturbed minimum function [13]. The method was shown to be locally superlinearly convergent under the assumptions that all $V \in \partial H(z^*)$ are nonsingular and $f'(x)$ is locally Lipschitz continuous around $x^*$.

In this paper, motivated by the smoothed penalty function for constrained optimization [14], we construct a new smoothing function for nonlinear inequalities, which allows us to approximate the nonsmooth system of transformed equations by a system of smooth equations. We develop a smoothing and regularization predictor-corrector method for solving the problem by modifying and extending the method in [13]. Besides allowing an arbitrary starting point, the presented algorithm is simpler than the predictor-corrector noninterior continuation methods developed by Burke and Xu [15].

The rest of this paper is organized as follows. In Section 2, we review some preliminaries to be used in the subsequent analysis and introduce a new smoothing function and its properties. In Section 3, we present a smoothing and regularization predictor-corrector method for solving the nonlinear inequalities and establish the global and local convergence of the proposed algorithm. In Section 4, preliminary numerical experiments are reported to show the efficiency of the algorithm, and Section 5 concludes the paper.

To end this section, we introduce some notation used in this paper. The set of $m \times k$ matrices with real entries is denoted by $\mathbb{R}^{m \times k}$, and $\mathbb{R}^n_{+}$ ($\mathbb{R}^n_{++}$) denotes the nonnegative (positive) orthant in $\mathbb{R}^n$. The superscript $\top$ denotes the transpose of a matrix or a vector. Define $N = \{1, 2, \ldots, n\}$, and for any vector $a \in \mathbb{R}^n$, let $D_a$ denote the diagonal matrix whose $i$-th diagonal element is $a_i$. $\|u\|$ denotes the 2-norm of a vector $u \in \mathbb{R}^n$. For a continuously differentiable function $f: \mathbb{R}^n \to \mathbb{R}^m$, we denote the Jacobian of $f$ at $x \in \mathbb{R}^n$ by $f'(x)$.

2 Smooth reformulation of nonlinear inequalities

In this section, we first review some definitions and basic results, and then introduce a new smoothing function and show its properties.

Definition 2.1 A matrix $M \in \mathbb{R}^{n \times n}$ is said to be a $P_0$-matrix if every principal minor of $M$ is nonnegative.

Definition 2.2 A function $F: \mathbb{R}^n \to \mathbb{R}^n$ is said to be a $P_0$-function if for all $x, y \in \mathbb{R}^n$ with $x \ne y$, there exists an index $i_0 \in N$ such that

$$x_{i_0} \ne y_{i_0}, \qquad (x_{i_0} - y_{i_0})\bigl[F_{i_0}(x) - F_{i_0}(y)\bigr] \ge 0.$$
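
(For orientation, we recall the standard fact that every monotone function is a $P_0$-function: if $(x - y)^{\top}(F(x) - F(y)) \ge 0$ for all $x, y$, then at least one index $i_0$ with $x_{i_0} \ne y_{i_0}$ must contribute a nonnegative term to the sum. In particular, $F(x) = Mx + q$ with $M$ positive semidefinite is a $P_0$-function.)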

For a $P_0$-matrix, the following conclusion holds [16].

Lemma 2.1 If $M \in \mathbb{R}^{n \times n}$ is a $P_0$-matrix, then every matrix of the form

$$D_a + D_b M$$

is nonsingular for all positive definite diagonal matrices $D_a, D_b \in \mathbb{R}^{n \times n}$.

Definition 2.3 Suppose that $G: \mathbb{R}^n \to \mathbb{R}^m$ is a locally Lipschitz function. $G$ is said to be semismooth at $x$ if $G$ is directionally differentiable at $x$ and

$$\lim_{\substack{V \in \partial G(x + th'),\ h' \to h,\ t \downarrow 0^{+}}} \{V h'\}$$

exists for any $h \in \mathbb{R}^n$, where $\partial G(x)$ denotes the generalized Jacobian in the sense of [17].

The concept of semismoothness was originally introduced by Mifflin for functionals [18]. Qi and Sun extended the definition of a semismooth function to vector-valued functions [19]. Convex functions, smooth functions, piecewise linear functions, and sub-smooth functions are examples of semismooth functions. A function is semismooth at $x$ if and only if all its component functions are, and the composition of semismooth functions is again semismooth.

Lemma 2.2 [19]

Suppose that $\varphi: \mathbb{R}^n \to \mathbb{R}^m$ is a locally Lipschitz function which is semismooth at $x$. Then

(a) for any $V \in \partial\varphi(x + h)$, $h \to 0$,

$$Vh - \varphi'(x; h) = o\bigl(\|h\|\bigr);$$

(b) for any $h \to 0$,

$$\varphi(x + h) - \varphi(x) - \varphi'(x; h) = o\bigl(\|h\|\bigr).$$

For problem (1.1), based on the function

$$x_+ = \bigl(\max\{0, x_1\}, \ldots, \max\{0, x_n\}\bigr)^{\top}, \quad \text{for } x \in \mathbb{R}^n,$$

it can be transformed into the following system of equations [8]:

$$f(x)_+ = 0. \qquad (2.1)$$

Since problem (2.1) is a nonsmooth equation, classical Newton methods cannot be used to solve it directly. Following the ideas in [14, 20, 21], we adopt the following smoothing function to approximate the nonsmooth equation:

$$\phi(\mu, a) = \frac{1}{2}\Bigl(a + \mu\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{a}{\mu}\Bigr)\Bigr)\Bigr), \qquad (2.2)$$

where $\mu > 0$ is a smoothing parameter and $\cosh t = \frac{e^{t} + e^{-t}}{2}$.
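
It is worth noting (an elementary reformulation we add here for clarity) that $\ln 2 + \ln(1 + \cosh t) = \ln(2 + e^{t} + e^{-t}) = 2\ln\bigl(e^{t/2} + e^{-t/2}\bigr)$, so (2.2) can be rewritten as

$$\phi(\mu, a) = \frac{a}{2} + \mu\ln\bigl(e^{\frac{a}{2\mu}} + e^{-\frac{a}{2\mu}}\bigr) = a + \mu\ln\bigl(1 + e^{-\frac{a}{\mu}}\bigr),$$

which is exactly the neural-network (softplus) smoothing of $a_+$ used in [20]; this form is also convenient for a numerically stable implementation.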

This new smoothing function has the following properties.

Lemma 2.3 For any $(\mu, a) \in \mathbb{R}_{++} \times \mathbb{R}$, it holds that:

(1) $\phi(\cdot, \cdot)$ is continuously differentiable at any $(\mu, a) \in \mathbb{R}_{++} \times \mathbb{R}$;

(2) with $\phi(0, a) := \lim_{\mu \downarrow 0} \phi(\mu, a)$, one has $\phi(0, a) = a_+$;

(3) $\dfrac{\partial \phi(\mu, a)}{\partial a} \ge 0$ at any $(\mu, a) \in \mathbb{R}_{++} \times \mathbb{R}$.

Proof (1) is straightforward, so we only prove (2) and (3).

For (2), following the ideas in [20], write $a_+ = \frac{1}{2}(a + |a|)$ and

$$\phi(\mu, a) = \frac{1}{2}\bigl(a + \varphi(\mu, a)\bigr), \qquad \varphi(\mu, a) := \mu\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{a}{\mu}\Bigr)\Bigr).$$

Furthermore, one can obtain the estimate [20]

$$\Bigl|\, |a| - \varphi(\mu, a)\, \Bigr| \le \frac{8\mu}{3}\, e^{-\frac{|a|}{\mu}}.$$

Then $\phi(0, a) = \frac{1}{2}(a + |a|) = a_+$.

For (3), a simple calculation shows that

$$\frac{\partial \varphi(\mu, a)}{\partial a} = \frac{\sinh\frac{a}{\mu}}{1 + \cosh\frac{a}{\mu}} \in (-1, 1),$$

so $\dfrac{\partial \phi(\mu, a)}{\partial a} = \dfrac{1}{2}\Bigl(1 + \dfrac{\partial \varphi(\mu, a)}{\partial a}\Bigr) \ge 0$ at any $(\mu, a) \in \mathbb{R}_{++} \times \mathbb{R}$. We complete the proof. □
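
As a quick numerical illustration of Lemma 2.3 (a sketch we add here; the function names are ours and the code is not part of the paper), the smoothing function and its partial derivative can be evaluated stably via the softplus form noted after (2.2):

```python
import numpy as np

def phi(mu, a):
    """Smoothing function (2.2), evaluated in the equivalent softplus form
    phi(mu, a) = a + mu*log(1 + exp(-a/mu))."""
    a = np.asarray(a, dtype=float)
    # np.logaddexp(0, t) = log(1 + exp(t)), computed without overflow
    return a + mu * np.logaddexp(0.0, -a / mu)

def dphi_da(mu, a):
    """d phi / d a = (1/2)(1 + sinh(a/mu)/(1 + cosh(a/mu)))
                   = (1/2)(1 + tanh(a/(2*mu)))  in (0, 1),
    consistent with Lemma 2.3(3)."""
    a = np.asarray(a, dtype=float)
    return 0.5 * (1.0 + np.tanh(a / (2.0 * mu)))

a = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
for mu in (1.0, 1e-2, 1e-6):
    print(mu, phi(mu, a))   # tends to max(0, a) componentwise as mu -> 0
```

For small $\mu$ the printed values agree with $a_+$ up to $O(\mu)$, illustrating Lemma 2.3(2).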

Let $z = (\mu, \varepsilon, x) \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times \mathbb{R}^n$ and

$$H(z) = H(\mu, \varepsilon, x) = \begin{pmatrix} \mu \\ \varepsilon \\ \Phi(z) \end{pmatrix}, \qquad (2.3)$$

where

$$\Phi(z) = \Phi(\mu, \varepsilon, x) = \begin{pmatrix} \phi(\mu, f_1(x)) \\ \vdots \\ \phi(\mu, f_n(x)) \end{pmatrix} + \varepsilon x. \qquad (2.4)$$

Define a merit function

$$\psi(z) = \|H(z)\|^2 = \mu^2 + \varepsilon^2 + \|\Phi(z)\|^2. \qquad (2.5)$$

Then the inequalities (1.1) can be reformulated as the following nonlinear equations:

$$H(z) = H(\mu, \varepsilon, x) = 0. \qquad (2.6)$$

Indeed, in view of Lemma 2.3(2), $H(z) = 0$ forces $\mu = \varepsilon = 0$ and then $\Phi(0, 0, x) = f(x)_+ = 0$; hence (2.6) holds exactly when $x$ solves (1.1).

Theorem 2.1 Let $H(\mu, \varepsilon, x)$ be defined as in (2.3). Then

(a) $H(\mu, \varepsilon, x)$ is continuously differentiable at any $z = (\mu, \varepsilon, x) \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times \mathbb{R}^n$, with Jacobian

$$H'(z) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \Phi'_{\mu}(z) & x & \Phi'_{x}(z) \end{pmatrix}, \qquad (2.7)$$

where

$$\Phi'_{\mu}(z) = \bigl(\phi'_{\mu}(\mu, f_1(x)), \ldots, \phi'_{\mu}(\mu, f_n(x))\bigr)^{\top}, \qquad \Phi'_{x}(z) = \frac{1}{2}\,\mathrm{diag}\Bigl\{1 + \frac{\sinh\frac{f_i(x)}{\mu}}{1 + \cosh\frac{f_i(x)}{\mu}} : i \in N\Bigr\}\, f'(x) + \varepsilon I;$$

(b) if $f$ is a $P_0$-function, then $H'(z)$ is nonsingular at any $z \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times \mathbb{R}^n$.

Proof (a) is straightforward, so we only prove (b). For (b), we only need to show that $\Phi'_x(z)$ is nonsingular. In fact, since $f$ is a $P_0$-function, $f'(x)$ is a $P_0$-matrix for all $x \in \mathbb{R}^n$ by Theorem 3.3 in [22]. Noting that $\frac{1}{2}\,\mathrm{diag}\{1 + \sinh\frac{f_i(x)}{\mu}/(1 + \cosh\frac{f_i(x)}{\mu}) : i \in N\}$ and $\varepsilon I$ are positive definite diagonal matrices, we conclude from Lemma 2.1 that $\Phi'_x(z)$ is nonsingular, which implies that $H'(z)$ is also nonsingular. This completes the proof. □
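
To make the construction (2.3)–(2.7) concrete, here is a minimal Python sketch (ours; `phi` is from the previous sketch, and `f`, `jac_f`, `make_H` are illustrative names, not from the paper) that assembles $H(z)$ and $H'(z)$ for $z = (\mu, \varepsilon, x)$:

```python
import numpy as np

def make_H(f, jac_f, n):
    """Return callables H(z) of (2.3) and H'(z) of (2.7), with z = (mu, eps, x).
    f: R^n -> R^n and jac_f: R^n -> R^{n x n} are user-supplied."""
    def H(z):
        mu, eps, x = z[0], z[1], z[2:]
        return np.concatenate(([mu, eps], phi(mu, f(x)) + eps * x))

    def Hprime(z):
        mu, eps, x = z[0], z[1], z[2:]
        fx = f(x)
        # dphi/da at f_i(x): equals (1/2)(1 + sinh(.)/(1 + cosh(.))), in (0, 1)
        d = 0.5 * (1.0 + np.tanh(fx / (2.0 * mu)))
        J = np.zeros((n + 2, n + 2))
        J[0, 0] = J[1, 1] = 1.0                       # rows for mu and eps
        # dphi/dmu, differentiated from phi(mu, a) = a + mu*log(1 + exp(-a/mu))
        J[2:, 0] = np.logaddexp(0.0, -fx / mu) + (fx / mu) * (1.0 - d)
        J[2:, 1] = x                                  # d Phi / d eps = x
        J[2:, 2:] = d[:, None] * jac_f(x) + eps * np.eye(n)   # Phi'_x of (2.7)
        return J

    return H, Hprime
```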

3 Algorithm and convergence

In this section, we first describe our algorithm and then establish its global convergence. We are now in a position to state our smoothing predictor-corrector algorithm.

Algorithm 3.1

Step 0. Take $\delta \in (0, 1)$ and $\sigma \in (0, 1)$. Let $e^0 = (\mu_0, \varepsilon_0, 0) \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times \mathbb{R}^n$, and let $x^0 \in \mathbb{R}^n$ be an arbitrary point. Choose $z^0 = (\mu_0, \varepsilon_0, x^0)$ and a parameter $\gamma \in (0, 1)$ such that $\gamma\|H(z^0)\| \le 1$ and $\gamma\mu_0 + \gamma\varepsilon_0 < 1$. Set $k = 0$.

Step 1. If $\|H(z^k)\| = 0$, stop. Otherwise, let $\beta_k = \beta(z^k)$, where $\beta(z) := \gamma\|H(z)\|$.

Step 2. Predictor step. If $\|H(z^k)\| \ge 1$, set $\hat z^k := z^k$ and go to Step 3. Otherwise, compute $\Delta\hat z^k = (\Delta\hat\mu_k, \Delta\hat\varepsilon_k, \Delta\hat x^k) \in \mathbb{R} \times \mathbb{R} \times \mathbb{R}^n$ from

$$H(z^k) + H'(z^k)\Delta\hat z^k = \beta_k\bigl\|H(z^k)\bigr\|e^0. \qquad (3.1)$$

If

$$\psi(z^k + \Delta\hat z^k) \le \psi^2(z^k), \qquad (3.2)$$

then set $\hat z^k = z^k + \Delta\hat z^k$; otherwise, set $\hat z^k = z^k$.

Step 3. Corrector step. If $\|H(\hat z^k)\| = 0$, stop. Otherwise, compute $\Delta\tilde z^k = (\Delta\tilde\mu_k, \Delta\tilde\varepsilon_k, \Delta\tilde x^k) \in \mathbb{R} \times \mathbb{R} \times \mathbb{R}^n$ from

$$H(\hat z^k) + H'(\hat z^k)\Delta\tilde z^k = \beta(\hat z^k)e^0. \qquad (3.3)$$

Let $l_k$ be the smallest nonnegative integer $l$ such that

$$\psi(\hat z^k + \delta^l\Delta\tilde z^k) \le \bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\delta^l\bigr]\psi(\hat z^k). \qquad (3.4)$$

Set $\lambda_k = \delta^{l_k}$ and $z^{k+1} := \hat z^k + \lambda_k\Delta\tilde z^k$.

Step 4. Set $k := k + 1$ and return to Step 1.
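
For concreteness, the following sketch transcribes Algorithm 3.1 in Python (our illustrative code, reusing `make_H` above; it is not the authors' implementation). The stopping rule $\psi(z^k) \le 10^{-3}$ and the choice $\gamma = 0.01\min\{1, 1/\|H(z^0)\|\}$ anticipate Section 4:

```python
import numpy as np

def predictor_corrector(H, Hprime, z0, delta=0.3, sigma=0.06,
                        tol=1e-3, max_iter=200):
    """Sketch of Algorithm 3.1; z0 = (mu0, eps0, x0) with mu0, eps0 > 0."""
    z = np.asarray(z0, dtype=float).copy()
    mu0, eps0 = z[0], z[1]
    e0 = np.zeros_like(z)
    e0[0], e0[1] = mu0, eps0                       # e^0 = (mu0, eps0, 0)
    gamma = 0.01 * min(1.0, 1.0 / np.linalg.norm(H(z)))   # Step 0 conditions
    for _ in range(max_iter):
        Hz = H(z)
        nHz = np.linalg.norm(Hz)
        if nHz ** 2 <= tol:                        # psi(z^k) small enough: stop
            break
        beta = gamma * nHz                         # beta_k = gamma*||H(z^k)||
        zhat = z                                   # Step 2: predictor
        if nHz < 1.0:
            dz = np.linalg.solve(Hprime(z), beta * nHz * e0 - Hz)      # (3.1)
            if np.linalg.norm(H(z + dz)) ** 2 <= nHz ** 4:             # (3.2)
                zhat = z + dz
        Hh = H(zhat)                               # Step 3: corrector
        dz = np.linalg.solve(Hprime(zhat),
                             gamma * np.linalg.norm(Hh) * e0 - Hh)     # (3.3)
        psi_hat = np.linalg.norm(Hh) ** 2
        c = sigma * (1.0 - gamma * (mu0 + eps0))
        lam = 1.0                                  # line search (3.4): lam = delta^l
        while np.linalg.norm(H(zhat + lam * dz)) ** 2 > (1.0 - c * lam) * psi_hat \
                and lam > 1e-12:
            lam *= delta
        z = zhat + lam * dz
    return z
```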

Remark 3.1 If $\|H(z^k)\| \ge 1$, then Algorithm 3.1 solves only one linear system of equations at the $k$-th iteration; otherwise, it solves two linear systems. The coefficient matrices of these two systems coincide when (3.2) is not satisfied. As in the algorithm of [13], no neighborhood of the smoothing path appears in the algorithm, so no additional computations are needed to keep the iteration sequence inside a prescribed neighborhood.

To prove the convergence of Algorithm 3.1, first define the set

$$\Omega = \bigl\{z = (\mu, \varepsilon, x) \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times \mathbb{R}^n \mid \mu \ge \beta(z)\mu_0,\ \varepsilon \ge \beta(z)\varepsilon_0\bigr\}.$$

The following lemmas show that Algorithm 3.1 is well defined and that the infinite sequence it generates has some desirable properties.

Lemma 3.1 If $f$ is a continuously differentiable $P_0$-function, then Algorithm 3.1 is well defined. In addition, $\mu_k > 0$, $\varepsilon_k > 0$ and $z^k = (\mu_k, \varepsilon_k, x^k) \in \Omega$ for any $k \ge 0$.

Proof Since $f$ is a continuously differentiable $P_0$-function, it follows from Theorem 2.1 that the matrix $H'(z)$ is nonsingular whenever $\mu > 0$, $\varepsilon > 0$. Since $\mu_0 > 0$, $\varepsilon_0 > 0$ by the choice of the initial point, we may assume, without loss of generality, that $\mu_k > 0$, $\varepsilon_k > 0$; we show that $\hat\mu_k > 0$, $\hat\varepsilon_k > 0$. If the predictor step is accepted, then by (3.1),

$$\hat\mu_k = \mu_k + \Delta\hat\mu_k = \beta_k\bigl\|H(z^k)\bigr\|\mu_0 > 0, \qquad (3.5)$$

$$\hat\varepsilon_k = \varepsilon_k + \Delta\hat\varepsilon_k = \beta_k\bigl\|H(z^k)\bigr\|\varepsilon_0 > 0; \qquad (3.6)$$

otherwise, we have $z^k = \hat z^k$, which means $\mu_k = \hat\mu_k$, $\varepsilon_k = \hat\varepsilon_k$. Thus we obtain $\hat\mu_k > 0$, $\hat\varepsilon_k > 0$. Furthermore, $H'(z^k)$ and $H'(\hat z^k)$ are nonsingular, which means that (3.1) and (3.3) are well defined.

Given $k \ge 0$, for any $\alpha \in (0, 1]$, let

$$R(\alpha) = H(\hat z^k + \alpha\Delta\tilde z^k) - H(\hat z^k) - \alpha H'(\hat z^k)\Delta\tilde z^k. \qquad (3.7)$$

Since $H(\cdot)$ is continuously differentiable, we obtain $\|R(\alpha)\| = o(\alpha)$. Then it follows from (3.3) and (3.7) that

$$\begin{aligned} \bigl\|H(\hat z^k + \alpha\Delta\tilde z^k)\bigr\| &= \bigl\|H(\hat z^k) + \alpha H'(\hat z^k)\Delta\tilde z^k + R(\alpha)\bigr\| \\ &\le (1 - \alpha)\bigl\|H(\hat z^k)\bigr\| + \alpha\gamma\bigl\|H(\hat z^k)\bigr\|\,\|e^0\| + \|R(\alpha)\| \\ &\le (1 - \alpha)\bigl\|H(\hat z^k)\bigr\| + \alpha\gamma(\mu_0 + \varepsilon_0)\bigl\|H(\hat z^k)\bigr\| + o(\alpha) \\ &= \bigl[1 - \bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\alpha\bigr]\bigl\|H(\hat z^k)\bigr\| + o(\alpha). \end{aligned} \qquad (3.8)$$

Therefore, (3.8) shows that there exists a positive number $\bar\alpha \in (0, 1]$ such that, for all $\alpha \in (0, \bar\alpha]$ and $\sigma \in (0, 1)$,

$$\bigl\|H(\hat z^k + \alpha\Delta\tilde z^k)\bigr\| \le \bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\alpha\bigr]\bigl\|H(\hat z^k)\bigr\| \qquad (3.9)$$

holds, which implies

$$\psi(\hat z^k + \alpha\Delta\tilde z^k) \le \bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\alpha\bigr]\psi(\hat z^k).$$

That is, a nonnegative integer $l_k$ satisfying (3.4) can always be found, which shows that Step 3 is well defined.

For $k = 0$, since $\beta(z^0) = \gamma\|H(z^0)\| \le 1$, we know $z^0 \in \Omega$. Assuming now that $z^i \in \Omega$ holds for $i = 0, 1, \ldots, k$, we show that it continues to hold for $k + 1$. If the predictor step is accepted, then it follows from (3.5), (3.6) and (3.2) that

$$\hat\mu_k = \beta_k\bigl\|H(z^k)\bigr\|\mu_0 = \gamma\psi(z^k)\mu_0 \ge \gamma\bigl(\psi(z^k + \Delta\hat z^k)\bigr)^{1/2}\mu_0 = \gamma\bigl\|H(z^k + \Delta\hat z^k)\bigr\|\mu_0 = \beta(\hat z^k)\mu_0 \qquad (3.10)$$

and

$$\hat\varepsilon_k = \beta_k\bigl\|H(z^k)\bigr\|\varepsilon_0 = \gamma\psi(z^k)\varepsilon_0 \ge \gamma\bigl(\psi(z^k + \Delta\hat z^k)\bigr)^{1/2}\varepsilon_0 = \gamma\bigl\|H(z^k + \Delta\hat z^k)\bigr\|\varepsilon_0 = \beta(\hat z^k)\varepsilon_0, \qquad (3.11)$$

which implies

$$\hat z^k \in \Omega. \qquad (3.12)$$

Otherwise, $z^k = \hat z^k$ and the inductive assumption show that (3.12) also holds. Noting (3.3) and $z^{k+1} = \hat z^k + \lambda_k\Delta\tilde z^k$, we have

$$\mu_{k+1} = (1 - \lambda_k)\hat\mu_k + \lambda_k\beta(\hat z^k)\mu_0, \qquad (3.13)$$

$$\varepsilon_{k+1} = (1 - \lambda_k)\hat\varepsilon_k + \lambda_k\beta(\hat z^k)\varepsilon_0. \qquad (3.14)$$

In addition, from (3.9) we know that there exists $\lambda_k \in (0, 1)$ such that

$$\bigl\|H(\hat z^k + \lambda_k\Delta\tilde z^k)\bigr\| \le \bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\lambda_k\bigr]\bigl\|H(\hat z^k)\bigr\| \le \bigl\|H(\hat z^k)\bigr\|. \qquad (3.15)$$

Therefore, it follows from (3.12), (3.13) and (3.15) that

$$\begin{aligned} \mu_{k+1} - \beta_{k+1}\mu_0 &= (1 - \lambda_k)\hat\mu_k + \lambda_k\beta(\hat z^k)\mu_0 - \gamma\bigl\|H(z^{k+1})\bigr\|\mu_0 \\ &\ge (1 - \lambda_k)\beta(\hat z^k)\mu_0 + \lambda_k\beta(\hat z^k)\mu_0 - \gamma\bigl\|H(z^{k+1})\bigr\|\mu_0 \\ &= \beta(\hat z^k)\mu_0 - \gamma\bigl\|H(z^{k+1})\bigr\|\mu_0 \\ &= \gamma\bigl\|H(\hat z^k)\bigr\|\mu_0 - \gamma\bigl\|H(z^{k+1})\bigr\|\mu_0 \ge 0. \end{aligned} \qquad (3.16)$$

Similarly, we can obtain $\varepsilon_{k+1} - \beta_{k+1}\varepsilon_0 \ge 0$. Thus $z^{k+1} \in \Omega$.

Finally, since $\mu_0 > 0$, $\varepsilon_0 > 0$, we may assume that $\mu_k > 0$, $\varepsilon_k > 0$ for any given $k \ge 0$. From $\hat\mu_k > 0$, $\hat\varepsilon_k > 0$, it follows from (3.13) and (3.14) that $\mu_{k+1} > 0$, $\varepsilon_{k+1} > 0$. Hence $\mu_k > 0$, $\varepsilon_k > 0$ for any $k \ge 0$. □

Lemma 3.2 Suppose that the infinite sequence $\{z^k = (\mu_k, \varepsilon_k, x^k)\}$ is generated by Algorithm 3.1. Then $0 < \mu_{k+1} \le \mu_k$, $0 < \varepsilon_{k+1} \le \varepsilon_k$, and the sequence $\{\|H(z^k)\|\}$ is monotonically decreasing.

Proof For any $k \ge 0$, it follows from (3.12), (3.13) and (3.14) that

$$\mu_{k+1} = (1 - \lambda_k)\hat\mu_k + \lambda_k\beta(\hat z^k)\mu_0 \le (1 - \lambda_k)\hat\mu_k + \lambda_k\hat\mu_k = \hat\mu_k, \qquad (3.17)$$

$$\varepsilon_{k+1} = (1 - \lambda_k)\hat\varepsilon_k + \lambda_k\beta(\hat z^k)\varepsilon_0 \le \hat\varepsilon_k. \qquad (3.18)$$

If the predictor step (Step 2) is not accepted at the $k$-th iterate, then $\hat\mu_k = \mu_k$, $\hat\varepsilon_k = \varepsilon_k$, and (3.17), (3.18) show the desired result. Otherwise, from (3.5), (3.6), $\|H(z^k)\| < 1$ and $z^k \in \Omega$, one has

$$\hat\mu_k = \beta_k\bigl\|H(z^k)\bigr\|\mu_0 \le \beta_k\mu_0 \le \mu_k, \qquad (3.19)$$

$$\hat\varepsilon_k = \beta_k\bigl\|H(z^k)\bigr\|\varepsilon_0 \le \beta_k\varepsilon_0 \le \varepsilon_k. \qquad (3.20)$$

Thus we obtain that $\mu_{k+1} \le \mu_k$, $\varepsilon_{k+1} \le \varepsilon_k$ hold for any $k \ge 0$.

If the predictor step (Step 2) is not accepted at the $k$-th iterate, then (3.15) implies that

$$\bigl\|H(z^{k+1})\bigr\| \le \bigl\|H(\hat z^k)\bigr\| = \bigl\|H(z^k)\bigr\|,$$

and the desired result has been obtained. Otherwise, it follows from (3.2) and $\|H(z^k)\| < 1$ that

$$\bigl\|H(\hat z^k)\bigr\| \le \bigl\|H(z^k)\bigr\|^2 \le \bigl\|H(z^k)\bigr\|.$$

Hence, for any $k \ge 0$, we obtain

$$\bigl\|H(z^{k+1})\bigr\| \le \bigl\|H(z^k)\bigr\|,$$

which means the sequence $\{\|H(z^k)\|\}$ is monotonically decreasing. □

Lemma 3.3 Assume that $f$ is a $P_0$-function and $\mu_1, \mu_2, \varepsilon_1, \varepsilon_2$ are given positive numbers satisfying $\mu_1 < \mu_2$, $\varepsilon_1 < \varepsilon_2$. Then $H$ defined by (2.3) has the property

$$\lim_{k \to +\infty} \bigl\|H(z^k)\bigr\| = +\infty$$

for any sequence $\{(\mu_k, \varepsilon_k, x^k)\}$ such that $\mu_1 \le \mu_k \le \mu_2$, $\varepsilon_1 \le \varepsilon_k \le \varepsilon_2$ for all $k$ and $\|x^k\| \to +\infty$ as $k \to +\infty$.

Proof We argue by contradiction. Suppose that the lemma is not true. Then there exists a sequence $\{z^k = (\mu_k, \varepsilon_k, x^k)\}$ such that $\mu_1 \le \mu_k \le \mu_2$, $\varepsilon_1 \le \varepsilon_k \le \varepsilon_2$ and $\psi(z^k) \le \psi(z^0)$, but $\|x^k\| \to \infty$. Since the sequence $\{x^k\}$ is unbounded, the index set $I = \{i \in N : \{x_i^k\} \text{ is unbounded}\}$ is nonempty. Without loss of generality, we can assume that $\{|x_i^k|\} \to +\infty$ for all $i \in I$. Then the sequence $\{\bar x^k\}$ defined by

$$\bar x_i^k = \begin{cases} 0, & i \in I, \\ x_i^k, & i \notin I, \end{cases}$$

is bounded. Since $f$ is a $P_0$-function, by Definition 2.2 we have

$$0 \le \max_{i \in N}\bigl[(x_i^k - \bar x_i^k)\bigl(f_i(x^k) - f_i(\bar x^k)\bigr)\bigr] = \max\Bigl\{0,\ \max_{i \in I}\bigl[x_i^k\bigl(f_i(x^k) - f_i(\bar x^k)\bigr)\bigr]\Bigr\} = x_{i_0}^k\bigl[f_{i_0}(x^k) - f_{i_0}(\bar x^k)\bigr], \qquad (3.21)$$

where $i_0$ is one of the indices for which the max is attained, assumed, without loss of generality, to be independent of $k$. Since $i_0 \in I$, one has $\{|x_{i_0}^k|\} \to +\infty$ as $k \to \infty$. We now break the proof into two cases.

Case 1: $x_{i_0}^k \to +\infty$ as $k \to \infty$. In this case, since $f_{i_0}(\bar x^k)$ is bounded, we deduce from (3.21) that $f_{i_0}(x^k) \ge f_{i_0}(\bar x^k)$.

If $\{f_{i_0}(x^k)\}$ is bounded, i.e., $f_{i_0}(\bar x^k) \le f_{i_0}(x^k) < +\infty$, then for $0 < \mu_1 \le \mu_k \le \mu_2$ and $\varepsilon_1 \le \varepsilon_k \le \varepsilon_2$, letting $k \to \infty$ yields that

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr)$$

is bounded and

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr) + \varepsilon_k x_{i_0}^k \to +\infty.$$

Thus $\|\Phi(z^k)\| \to +\infty$ as $k \to \infty$.

If $f_{i_0}(x^k) \to +\infty$, then for $0 < \mu_1 \le \mu_k \le \mu_2$ and $\varepsilon_1 \le \varepsilon_k \le \varepsilon_2$ we have

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr) \to +\infty$$

and

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr) + \varepsilon_k x_{i_0}^k \to +\infty.$$

Thus $\|\Phi(z^k)\| \to +\infty$ as $k \to \infty$.

Case 2: $x_{i_0}^k \to -\infty$ as $k \to \infty$. In this case, since $f_{i_0}(\bar x^k)$ is bounded, we deduce from (3.21) that $f_{i_0}(x^k) \le f_{i_0}(\bar x^k)$.

If $\{f_{i_0}(x^k)\}$ is bounded, i.e., $-\infty < f_{i_0}(x^k) \le f_{i_0}(\bar x^k)$, then for $0 < \mu_1 \le \mu_k \le \mu_2$ and $\varepsilon_1 \le \varepsilon_k \le \varepsilon_2$,

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr)$$

is bounded and

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr) + \varepsilon_k x_{i_0}^k \to -\infty.$$

Thus $\|\Phi(z^k)\| \to +\infty$ as $k \to \infty$.

If $f_{i_0}(x^k) \to -\infty$, then for $0 < \mu_1 \le \mu_k \le \mu_2$ and $\varepsilon_1 \le \varepsilon_k \le \varepsilon_2$,

$$\begin{aligned} \frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr) &= \frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\ln\Bigl(e^{-\frac{f_{i_0}(x^k)}{\mu_k}}\Bigl(2e^{\frac{f_{i_0}(x^k)}{\mu_k}} + 1 + e^{\frac{2f_{i_0}(x^k)}{\mu_k}}\Bigr)\Bigr)\Bigr) \\ &= \frac{1}{2}\mu_k\ln\Bigl(2e^{\frac{f_{i_0}(x^k)}{\mu_k}} + 1 + e^{\frac{2f_{i_0}(x^k)}{\mu_k}}\Bigr) \end{aligned}$$

is bounded and

$$\frac{1}{2}\Bigl(f_{i_0}(x^k) + \mu_k\Bigl(\ln 2 + \ln\Bigl(1 + \cosh\frac{f_{i_0}(x^k)}{\mu_k}\Bigr)\Bigr)\Bigr) + \varepsilon_k x_{i_0}^k \to -\infty.$$

Thus $\|\Phi(z^k)\| \to +\infty$ as $k \to \infty$.

In summary, we obtain $\psi(z^k) \to +\infty$ as $k \to \infty$, which contradicts $\psi(z^k) \le \psi(z^0)$, and the proof is completed. □

Under the assumption of $f$ being a $P_0$-function, Lemma 3.2 and Lemma 3.3 indicate that the level set

$$L_{\mu}(z^0) = \bigl\{z \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times \mathbb{R}^n \mid \psi(z) \le \psi(z^0)\bigr\} \qquad (3.22)$$

is bounded.

To obtain the global convergence of Algorithm 3.1, we need the following assumption.

Assumption 3.1 The solution set $S := \{x \in \mathbb{R}^n : f(x) \le 0\}$ of (1.1) is nonempty and bounded.

Note that Assumption 3.1 seems to be the weakest condition used in the previous literature to ensure boundedness of the iteration sequence (see [23]).

Theorem 3.1 Assume that the infinite sequence $\{z^k\}$ is generated by Algorithm 3.1. Then

(a) the sequences $\{\|H(z^k)\|\}$, $\{\mu_k\}$ and $\{\varepsilon_k\}$ converge to zero as $k \to +\infty$, and hence any accumulation point of $\{x^k\}$ is a solution of (1.1);

(b) if Assumption 3.1 is satisfied, then the sequence $\{z^k\}$ is bounded, and hence there exists at least one accumulation point $z^* = (\mu^*, \varepsilon^*, x^*)$ with $H(z^*) = 0$ and $x^* \in S$.

Proof By Lemma 3.2, we know that $\{\|H(z^k)\|\}$ converges to some $h^* \ge 0$ as $k \to \infty$. Suppose that $\{\|H(z^k)\|\}$ does not converge to zero. Then $h^* > 0$, and $\{z^k\}$ is bounded by Lemma 3.2 and Lemma 3.3. Assume that $z^* = (\mu^*, \varepsilon^*, x^*)$ is an accumulation point of $\{z^k\}$; without loss of generality, we assume that $\{z^k\}$ converges to $z^*$. Then, by the continuity of $H$ and the definition of $\beta(\cdot)$, the sequences $\{\mu_k\}$, $\{\varepsilon_k\}$ and $\{\beta_k\}$ converge to $\mu^*$, $\varepsilon^*$ and $\beta^*$, respectively, and $h^* = \|H(z^*)\| > 0$. Therefore, by (3.4), we have

$$\lim_{k \to \infty} \lambda_k = 0.$$

On the one hand, by Step 3 of Algorithm 3.1, the integer $l_k - 1$ does not satisfy the line search criterion, i.e.,

$$\psi(\hat z^k + \delta^{l_k - 1}\Delta\tilde z^k) > \bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\delta^{l_k - 1}\bigr]\psi(\hat z^k),$$

which implies that

$$\frac{\psi(\hat z^k + \delta^{l_k - 1}\Delta\tilde z^k) - \psi(\hat z^k)}{\delta^{l_k - 1}} > -\sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\psi(\hat z^k). \qquad (3.23)$$

Letting $k \to +\infty$, we have

$$H(z^*)^{\top} H'(z^*)\Delta\tilde z^* \ge -\sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\psi(z^*). \qquad (3.24)$$

On the other hand, by (3.3), we have

$$H(z^*) + H'(z^*)\Delta\tilde z^* = \beta(z^*)e^0,$$

i.e.,

$$H(z^*)^{\top} H'(z^*)\Delta\tilde z^* = -\psi(z^*) + \beta(z^*)H(z^*)^{\top}e^0. \qquad (3.25)$$

Combining (3.24) and (3.25), we deduce that

$$\bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\bigr]\psi(z^*) \le \beta(z^*)H(z^*)^{\top}e^0 \le \beta(z^*)\sqrt{\mu_0^2 + \varepsilon_0^2}\,\bigl\|H(z^*)\bigr\|,$$

which means

$$\bigl[1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr)\bigr]\psi(z^*) \le \beta(z^*)\sqrt{\mu_0^2 + \varepsilon_0^2}\,\bigl\|H(z^*)\bigr\| \le \gamma(\mu_0 + \varepsilon_0)\psi(z^*). \qquad (3.26)$$

Since $\|H(z^*)\| > 0$, this gives

$$1 - \sigma\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr) \le \gamma(\mu_0 + \varepsilon_0),$$

i.e.,

$$(1 - \sigma)\bigl(1 - \gamma(\mu_0 + \varepsilon_0)\bigr) \le 0.$$

This contradicts the fact that $\sigma \in (0, 1)$ and $\gamma(\mu_0 + \varepsilon_0) < 1$. Hence we have $h^* = 0$ and $\mu^* = 0$, $\varepsilon^* = 0$. Thus $H(z^*) = 0$; that is, $x^*$ is a solution of (1.1).

Next we prove (b). It follows from (a) that $\|H(z^k)\| \to 0$ as $k \to \infty$. By (2.3), one has

$$\lim_{k \to \infty}\mu_k = 0, \qquad \lim_{k \to \infty}\varepsilon_k = 0, \qquad \lim_{k \to \infty}\bigl\|\Phi(z^k)\bigr\| = 0.$$

Therefore, by the mountain pass theorem (Theorem 5.4 in [24]) and along the lines of the proof of Theorem 3.1 in [23], we obtain that $\{x^k\}$ is bounded, and hence so is $\{z^k\}$. Thus $\{z^k\}$ has at least one accumulation point $z^* = (\mu^*, \varepsilon^*, x^*)$. By (a), we have $H(z^*) = 0$ with $\mu^* = 0$, $\varepsilon^* = 0$ and $x^* \in S$. □

Next, we show the local superlinear convergence of Algorithm 3.1.

Theorem 3.2 Suppose that $f$ is a continuously differentiable $P_0$-function, Assumption 3.1 is satisfied, and $z^*$ is an accumulation point of the iteration sequence $\{z^k\}$ generated by Algorithm 3.1. If all $V \in \partial H(z^*)$ are nonsingular and $f'(x)$ is locally Lipschitz continuous around $x^*$, then the whole sequence $\{z^k\}$ converges to $z^*$ superlinearly, i.e.,

$$\|z^{k+1} - z^*\| = o\bigl(\|z^k - z^*\|\bigr),$$

and

$$\mu_{k+1} = o(\mu_k), \qquad \varepsilon_{k+1} = o(\varepsilon_k).$$

Proof First, from Theorem 3.1, we know that $z^*$ is a solution of $H(z) = 0$. Since all $V \in \partial H(z^*)$ are nonsingular, it follows from [19, 25, 27] that there is a constant $C > 0$ such that, for all $z^k$ sufficiently close to $z^*$,

$$\bigl\|H'(z^k)^{-1}\bigr\| \le C.$$

Then, since $H(z)$ is semismooth at $z^*$ (and hence locally Lipschitz continuous near $z^*$), for all $z^k$ sufficiently close to $z^*$,

$$\psi(z^k) = \bigl\|H(z^k)\bigr\|^2 = \bigl\|H(z^k) - H(z^*)\bigr\|^2 = O\bigl(\|z^k - z^*\|^2\bigr). \qquad (3.27)$$

For all $z^k$ sufficiently close to $z^*$, the direction $\Delta z^k$ computed from (3.1) satisfies

$$\begin{aligned} \|z^k + \Delta z^k - z^*\| &\le \bigl\|H'(z^k)^{-1}\bigr\|\Bigl(\bigl\|H(z^k) - H(z^*) - H'(z^k)(z^k - z^*)\bigr\| + \beta_k\bigl\|H(z^k)\bigr\|\,\|e^0\|\Bigr) \\ &= o\bigl(\|z^k - z^*\|\bigr) + O\bigl(\|H(z^k)\|^2\bigr) = o\bigl(\|z^k - z^*\|\bigr). \end{aligned} \qquad (3.28)$$

Thus, for $z^k$ sufficiently close to $z^*$, we obtain

$$\psi(z^k + \Delta z^k) = \bigl\|H(z^k + \Delta z^k)\bigr\|^2 = O\bigl(\|z^k + \Delta z^k - z^*\|^2\bigr) = o\bigl(\|z^k - z^*\|^2\bigr) = o\bigl(\|H(z^k) - H(z^*)\|^2\bigr) = o\bigl(\psi(z^k)\bigr). \qquad (3.29)$$

Hence, for $z^k$ sufficiently close to $z^*$, we have $z^{k+1} = z^k + \Delta z^k$. By (3.28), we conclude that

$$\|z^{k+1} - z^*\| = o\bigl(\|z^k - z^*\|\bigr)$$

holds.

Next, when $k$ is sufficiently large, $z^{k+1} = z^k + \Delta z^k$, so

$$\mu_{k+1} = \mu_k + \Delta\mu_k = \beta_k\bigl\|H(z^k)\bigr\|\mu_0$$

and

$$\varepsilon_{k+1} = \varepsilon_k + \Delta\varepsilon_k = \beta_k\bigl\|H(z^k)\bigr\|\varepsilon_0.$$

Hence, for all $k$ sufficiently large,

$$\mu_{k+1} = \gamma\mu_0\bigl\|H(z^k)\bigr\|^2, \qquad \varepsilon_{k+1} = \gamma\varepsilon_0\bigl\|H(z^k)\bigr\|^2,$$

which, together with (3.29), yields

$$\lim_{k \to +\infty}\frac{\mu_{k+1}}{\mu_k} = \lim_{k \to +\infty}\frac{\|H(z^k)\|^2}{\|H(z^{k-1})\|^2} = 0$$

and

$$\lim_{k \to +\infty}\frac{\varepsilon_{k+1}}{\varepsilon_k} = \lim_{k \to +\infty}\frac{\|H(z^k)\|^2}{\|H(z^{k-1})\|^2} = 0.$$

This means that $\mu_{k+1} = o(\mu_k)$, $\varepsilon_{k+1} = o(\varepsilon_k)$, and the desired result follows. □

4 Numerical experiments

In this section, we test our algorithm on systems of inequalities. In our implementation we adopt the strategy of [8]: the function $H$ defined by (2.3) is replaced by

$$H(z) = H(\mu, \varepsilon, x) = \begin{pmatrix} \mu \\ \varepsilon \\ \phi(\mu, f(x)) + c\varepsilon x \end{pmatrix}, \qquad (4.1)$$

where $\phi(\mu, f(x)) := \bigl(\phi(\mu, f_1(x)), \ldots, \phi(\mu, f_n(x))\bigr)^{\top}$ and $c$ is a constant. It is easy to see that this change does not affect any of the theoretical results obtained in Section 3.

In our numerical experiments, the parameters used in the algorithm are chosen as follows: $\sigma = 0.06$, $\delta = 0.3$, $\mu_0 = 1$ and $\gamma = 0.01\min\{1, 1/\|H(z^0)\|\}$. The algorithm terminates when $\psi(z^k) \le 10^{-3}$. In the tables of test results, st denotes the starting point $x^0$, ic denotes the number of corrector iterations in Step 3 entered directly from Step 1, ip denotes the number of predictor iterations, iter denotes the number of iterations of the smoothing method in [8], cpu denotes the CPU time in seconds for solving the underlying problem, and sol denotes the solution found for the test problem.

Below we give a detailed description of the tested problems. For Examples 4.1, 4.2 and 4.3, we compare the results obtained by our method with those obtained by the smoothing method of [8]. The results are summarized in Table 1, Table 2 and Table 3.

Table 1 Numerical results of Example 4.1
Table 2 Numerical results of Example 4.2
Table 3 Numerical results of Example 4.3

Example 4.1 [9, 10]

Consider (1.1), where $f = (f_1, f_2)^{\top}$ with $x \in \mathbb{R}^2$ and

$$f_1(x) = x_1^2 + x_2^2 - 1, \qquad f_2(x) = -x_1^2 - x_2^2 + (0.999)^2.$$

Example 4.2 [9, 10]

Consider (1.1), where $f = (f_1, f_2)^{\top}$ with $x \in \mathbb{R}^2$ and

$$f_1(x) = \sin(x_1), \qquad f_2(x) = \cos(x_2).$$

Example 4.3 [26]

Consider (1.1), where $f = (f_1, f_2)^{\top}$ with $x \in \mathbb{R}^2$ and

$$f_1(x) = x_1 - 0.7\sin(x_1) - 0.2\cos(x_2), \qquad f_2(x) = x_2 - 0.7\cos(x_1) + 0.2\sin(x_2).$$
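
Putting the sketches together, a possible driver for Example 4.1 looks as follows (illustrative only: the starting point, the choice $\varepsilon_0 = 1$, and the helper names are our assumptions, and the signs in $f$ follow the reconstruction given above):

```python
import numpy as np

# Example 4.1: f(x) <= 0 describes the annulus 0.999 <= ||x|| <= 1
def f(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,
                     -x[0] ** 2 - x[1] ** 2 + 0.999 ** 2])

def jac_f(x):
    return np.array([[ 2.0 * x[0],  2.0 * x[1]],
                     [-2.0 * x[0], -2.0 * x[1]]])

H, Hprime = make_H(f, jac_f, n=2)
z0 = np.array([1.0, 1.0, 3.0, 1.0])   # (mu0, eps0, x0): mu0 = 1 as in Section 4,
                                      # eps0 = 1 and x0 = (3, 1) chosen arbitrarily
z = predictor_corrector(H, Hprime, z0)
print("x =", z[2:], " f(x) =", f(z[2:]))
```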

5 Conclusion

In this paper, we have presented a new smoothing and regularization predictor-corrector algorithm for solving nonlinear inequalities and established its global convergence and local superlinear convergence. Furthermore, the smoothing parameter $\mu$ and the regularization parameter $\varepsilon$ in our algorithm are treated as independent variables. Preliminary numerical results show the efficiency of the algorithm.

References

1. Zakian V, Al-Naib U: Design of dynamical and control systems by the method of inequalities. Proc. Inst. Electr. Eng. 1973, 120: 1421–1472.


  2. Dennis JE, Schnabel RB: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs; 1983.


3. Neumaier A: Interval Methods for Systems of Equations. Cambridge University Press, Cambridge; 1990.


4. Daniel JW: Newton's method for nonlinear inequalities. Numer. Math. 1973, 21: 381–387.


  5. Mayne DQ, Polak E, Heunis AJ: Solving nonlinear inequalities in a finite number of iterations. J. Optim. Theory Appl. 1981, 33: 207–221.


  6. Sahba M: On the solution of nonlinear inequalities in a finite number of iterations. Numer. Math. 1985, 46: 229–236.


  7. Yin HX, Huang ZH, Qi L: The convergence of a Levenberg-Marquardt method for nonlinear inequalities. Numer. Funct. Anal. Optim. 2008, 29: 687–716.


  8. Huang ZH, Zhang Y, Wu W: A smoothing-type algorithm for solving system of inequalities. J. Comput. Appl. Math. 2008, 220: 355–363.


9. Zhu JG, Liu HW, Li XL: A regularized smoothing-type algorithm for solving a system of inequalities with a $P_0$-function. J. Comput. Appl. Math. 2010, 233: 2611–2619.


  10. He C, Ma CF: A smoothing self-adaptive Levenberg-Marquardt algorithm for solving system of nonlinear inequalities. Appl. Math. Comput. 2010, 216: 3056–3063.


11. Huang ZH, Qi L, Sun D: Sub-quadratic convergence of a smoothing Newton algorithm for the $P_0$- and monotone LCP. Math. Program. 2004, 99: 423–441.


12. Zhao N, Huang ZH: A nonmonotone smoothing Newton algorithm for solving box constrained variational inequalities with a $P_0$ function. J. Ind. Manag. Optim. 2011, 7(2): 467–482.


13. Huang ZH, Han J, Chen Z: Predictor-corrector smoothing Newton method, based on a new smoothing function, for solving the nonlinear complementarity problem with a $P_0$ function. J. Optim. Theory Appl. 2003, 117: 39–68.


  14. Herty M, Klar A, Singh AK, Spellucci P: Smoothed penalty algorithms for optimization of nonlinear models. Comput. Optim. Appl. 2007, 37: 157–176.


  15. Burke J, Xu S: A noninterior predictor-corrector path-following algorithm for the monotone linear complementarity problem. Math. Program. 2000, 87: 113–130.


16. De Luca T, Facchinei F, Kanzow C: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 1996, 75: 407–439.


  17. Clarke FH: Optimization and Nonsmooth Analysis. Wiley, New York; 1983.


  18. Mifflin R: Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 1977, 15(6):957–972.


  19. Qi L, Sun J: A nonsmooth version of Newton’s method. Math. Program. 1993, 58(3):353–367.


20. Lee YJ, Mangasarian OL: SSVM: A smooth support vector machine for classification. Comput. Optim. Appl. 2001, 20: 5–22.


21. Che H, Li M: A smoothing and regularization Broyden-like method for nonlinear inequalities. J. Appl. Math. Comput. 2012. doi:10.1007/s12190-012-0588-2


  22. Chen B, Harker PT: A non-interior-point continuation method for linear complementarity problems. SIAM J. Matrix Anal. Appl. 1993, 14: 1168–1190.


23. Huang ZH, Han J, Xu D, Zhang L: The non-interior continuation methods for solving the $P_0$ function nonlinear complementarity problem. Sci. China 2001, 44(2): 1107–1114.


  24. Facchinei F, Kanzow C: Beyond monotonicity in regularization methods for nonlinear complementarity problems. SIAM J. Control Optim. 1999, 37(2):1150–1161.


  25. Qi L: Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 1993, 18: 227–244.


  26. Zhang Y, Huang ZH: A nonmonotone smoothing-type algorithm for solving a system of equalities and inequalities. J. Comput. Appl. Math. 2010, 233: 2312–2321.


  27. Qi L, Sun DF, Zhou GL: A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Math. Program., Ser. A 2000, 87: 1–35.



Acknowledgements

This research was supported by the Natural Science Foundation of China (Grant Nos. 11171180, 11171193, 11126233, 10901096) and the Natural Science Foundation of Shandong Province (Grant Nos. ZR2009AL019, ZR2011AM016). The authors are indebted to the anonymous referees for their numerous insightful comments and constructive suggestions, which helped improve the presentation of the article. The authors thank Prof. Yiju Wang for his careful reading of the manuscript.

Author information


Correspondence to Haitao Che.

Additional information

Competing interests

The author declares that he has no competing interests.

Authors’ contributions

The author carried out the proofs, conceived of the study, participated in its design and coordination, and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Che, H. A smoothing and regularization predictor-corrector method for nonlinear inequalities. J Inequal Appl 2012, 214 (2012). https://doi.org/10.1186/1029-242X-2012-214
