
Kernel function based interior-point methods for horizontal linear complementarity problems

Abstract

It is well known that each kernel function defines an interior-point algorithm. In this paper we propose new classes of kernel functions, different in form from the known kernel functions, whose barrier term is an exponential power of an exponential function, and we define interior-point methods (IPMs) based on them for $P_*(\kappa)$-horizontal linear complementarity problems (HLCPs). New search directions and proximity measures are defined by these kernel functions. We obtain the best known complexity results to date for both large- and small-update methods.

1 Introduction

In this paper we consider the $P_*(\kappa)$-horizontal linear complementarity problem (HLCP), stated as follows.

Given a $P_*(\kappa)$-pair $\{M,N\}$ with $M,N\in\mathbf{R}^{n\times n}$, $q\in\mathbf{R}^n$, and $\kappa\ge 0$, find a pair $(x;s)\in\mathbf{R}^{2n}$ such that

\[
Mx+Ns=q,\qquad xs=0,\qquad (x;s)\ge 0.
\]
(1)

Note that $\{M,N\}$ is called a $P_*(\kappa)$-pair if $Mx+Ns=0$ implies that

\[
(1+4\kappa)\sum_{i\in I_+(x)}x_is_i+\sum_{i\in I_-(x)}x_is_i\ge 0,
\]

where $I_+(x):=\{i\in I:x_is_i\ge 0\}$, $I_-(x):=\{i\in I:x_is_i<0\}$, and $I:=\{1,2,\ldots,n\}$.
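Verifying the $P_*(\kappa)$-pair condition exactly requires a certificate over the whole null space of $(M\ N)$. As a small illustration only, the following Python sketch samples that null space and reports the worst observed value of the left-hand side above; a negative value certifies that $\{M,N\}$ is not a $P_*(\kappa)$-pair. The function name and tolerance are our own choices, not part of the paper.

```python
import numpy as np

def p_star_violation(M, N, kappa, trials=1000, seed=0):
    """Monte-Carlo check of the P*(kappa)-pair condition: sample (x; s)
    with Mx + Ns = 0 from the null space of [M N] and report the worst
    value of (1+4*kappa)*sum_{+} x_i s_i + sum_{-} x_i s_i.
    A negative return value disproves the condition; a nonnegative one
    is only evidence, not a proof."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    # Orthonormal basis of the null space of [M N] via the SVD.
    _, sv, Vt = np.linalg.svd(np.hstack([M, N]))
    null = Vt[np.sum(sv > 1e-12):].T            # 2n x k basis matrix
    worst = np.inf
    for _ in range(trials):
        z = null @ rng.standard_normal(null.shape[1])
        x, s = z[:n], z[n:]
        prod = x * s
        val = (1 + 4 * kappa) * prod[prod >= 0].sum() + prod[prod < 0].sum()
        worst = min(worst, val)
    return worst
```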

$P_*(\kappa)$-HLCPs have many applications in economic equilibrium problems, noncooperative games, traffic assignment problems, and optimization problems [1, 2]. The $P_*(\kappa)$-HLCP (1) includes the standard linear complementarity problem (LCP) as well as linear and quadratic optimization problems. Indeed, when $N$ is nonsingular, the $P_*(\kappa)$-HLCP reduces to the $P_*(\kappa)$-LCP. Furthermore, when $\kappa=0$, the $P_*(0)$-HLCP is the monotone HLCP.

Recently, Bai et al. [3] defined the concept of eligible kernel functions, which must satisfy four conditions, and proposed primal-dual IPMs for linear optimization (LO) problems based on these functions; some of these methods achieve the best known complexity results for both large- and small-update methods. Cho [4] and Cho et al. [5] extended these algorithms for LO to $P_*(\kappa)$-linear complementarity problems (LCPs) and obtained complexity results for large-update methods similar to those for LO problems. Amini et al. [6, 7] introduced new IPMs based on parametric versions of the kernel functions in [3] and, supported by numerical tests, obtained better iteration bounds than that of the algorithm in [3]. Wang et al. [2] generalized polynomial IPMs for LO problems to the $P_*(\kappa)$-HLCP based on a finite kernel function, first defined in [8], and obtained the same iteration bounds for large- and small-update methods as for LO problems. Ghami et al. [9] extended IPMs for LO problems to $P_*(\kappa)$-LCPs based on the eligible kernel functions defined in [3] and proposed large- as well as small-update methods. Lesaja et al. [10] also proposed IPMs for $P_*(\kappa)$-LCPs based on ten kernel functions originally defined for LO problems. Ghami et al. [11] proposed an IPM for LO problems based on a kernel function whose barrier term is a trigonometric function; however, this method does not attain the best known iteration bound for a large-update method. Cho et al. [12] defined a new kernel function, whose barrier term is an exponential power of an exponential function, for LO problems and obtained the best known iteration bounds for large- and small-update methods.

Motivated by these works, we introduce new classes of eligible kernel functions, which differ from the known kernel functions in [3, 6, 7] in having an exponential power of an exponential barrier term, and we give a complexity analysis of the IPMs for the $P_*(\kappa)$-HLCP based on these kernel functions. We show that these algorithms have $O\bigl((1+2\kappa)\sqrt{n}\log n\log\frac{n\mu_0}{\epsilon}\bigr)$ and $O\bigl((1+2\kappa)\sqrt{n}\log\frac{n\mu_0}{\epsilon}\bigr)$ iteration bounds for large- and small-update methods, respectively, which are currently the best known iteration bounds for such methods.

The paper is organized as follows. In Section 2 we recall some basic concepts and present a generic interior-point algorithm for the $P_*(\kappa)$-HLCP. In Section 3 we introduce the new classes of eligible kernel functions and their technical properties. Finally, in Section 4 we derive the framework for analyzing the iteration bounds and the complexity results of the algorithms based on these kernel functions.

Notational conventions: $\mathbf{R}^n_+$ and $\mathbf{R}^n_{++}$ denote the sets of $n$-dimensional nonnegative and positive vectors, respectively. For $x,s\in\mathbf{R}^n$, $x_{\min}$, $xs$, and $(x;s)$ denote the smallest component of the vector $x$, the componentwise product of the vectors $x$ and $s$, and the column vector $(x^T,s^T)^T$, respectively. We denote by $D$ the diagonal matrix obtained from a vector $d$, i.e., $D=\operatorname{diag}(d)$, and $e$ denotes the $n$-dimensional vector of ones. For $f,g:\mathbf{R}_{++}\to\mathbf{R}_{++}$, $f(x)=O(g(x))$ if $f(x)\le c_1g(x)$ for some positive constant $c_1$, and $f(x)=\Theta(g(x))$ if $c_2g(x)\le f(x)\le c_3g(x)$ for some positive constants $c_2$ and $c_3$.

2 Preliminaries

In this section we recall some basic definitions and introduce a generic interior-point algorithm for the $P_*(\kappa)$-HLCP.

Definition 2.1 [13]

Let $M\in\mathbf{R}^{n\times n}$ and $\kappa\ge 0$.

(i) $M$ is called a positive semidefinite matrix if $x^T(Mx)\ge 0$ for all $x\in\mathbf{R}^n$.

(ii) $M$ is called a $P_0$-matrix if for every nonzero $x\in\mathbf{R}^n$ there exists an index $i\in I$ such that $x_i\neq 0$ and $x_i[Mx]_i\ge 0$.

(iii) $M$ is called a $P_*(\kappa)$-matrix if, for all $x\in\mathbf{R}^n$,

\[
(1+4\kappa)\sum_{i\in I_+(x)}x_i[Mx]_i+\sum_{i\in I_-(x)}x_i[Mx]_i\ge 0,
\]

where $[Mx]_i$ denotes the $i$th component of the vector $Mx$, $I_+(x)=\{i\in I:x_i[Mx]_i\ge 0\}$, and $I_-(x)=\{i\in I:x_i[Mx]_i<0\}$.

Definition 2.2 [14]

Let $M,N\in\mathbf{R}^{n\times n}$ and $\kappa\ge 0$.

(i) $\{M,N\}$ is called a monotone pair if $Mx+Ns=0$ implies $x^Ts\ge 0$.

(ii) $\{M,N\}$ is called a $P_0$-pair if $Mx+Ns=0$ and $(x;s)\neq 0$ imply that there exists an index $i\in I$ such that $x_i\neq 0$ or $s_i\neq 0$, and $x_is_i\ge 0$.

(iii) $\{M,N\}$ is called a $P_*(\kappa)$-pair if $Mx+Ns=0$ implies $x^Ts\ge-4\kappa\sum_{i\in I_+(x)}x_is_i$, where $I_+(x)=\{i\in I:x_is_i\ge 0\}$.

Lemma 2.3 If $\{M,N\}$ is a $P_0$-pair, then

\[
\mathcal{M}=\begin{pmatrix} M & N \\ S & X \end{pmatrix}
\]

is a nonsingular matrix for any positive diagonal matrices $X,S\in\mathbf{R}^{n\times n}$.

Proof Assume that the matrix $\mathcal{M}$ is singular. Then $\mathcal{M}\zeta=0$ for some nonzero $\zeta=(\xi;\eta)\in\mathbf{R}^{2n}$, i.e., $M\xi+N\eta=0$ and $s_i\xi_i+x_i\eta_i=0$ for all $i\in I$. Since $(\xi;\eta)\neq 0$ and $\{M,N\}$ is a $P_0$-pair, there exists an index $i\in I$ such that $\xi_i\neq 0$ or $\eta_i\neq 0$, and $\xi_i\eta_i\ge 0$. Note that $\eta_i\neq 0$, since $\eta_i=0$ would force $\xi_i=0$. On the other hand, $\xi_i\eta_i=-x_i\eta_i^2/s_i<0$. This is a contradiction. This completes the proof. □

Since the class of $P_0$-pairs includes the class of $P_*(\kappa)$-pairs, we obtain the following corollary.

Corollary 2.4 Let $\{M,N\}$ be a $P_*(\kappa)$-pair and $x,s\in\mathbf{R}^n_{++}$. Then, for all $c\in\mathbf{R}^n$, the system

\[
M\Delta x+N\Delta s=0,\qquad S\Delta x+X\Delta s=c
\]

has a unique solution $(\Delta x;\Delta s)$.

The basic idea of generic IPMs is to replace the second equation of (1) by the parameterized equation $xs=\mu e$ with $\mu>0$, i.e., we consider the following system:

\[
Mx+Ns=q,\qquad xs=\mu e,\qquad (x;s)>0.
\]
(2)

Without loss of generality, we assume that (1) satisfies the interior-point condition (IPC), i.e., there exists $(x^0;s^0)>0$ such that $Mx^0+Ns^0=q$ [15]. Since $\{M,N\}$ is a $P_*(\kappa)$-pair and (1) satisfies the IPC, the system (2) has a unique solution $(x(\mu);s(\mu))$ for each $\mu>0$, called the $\mu$-center. The set of $\mu$-centers is called the central path of (1). The limit of the central path exists and, since the limit point satisfies (1), it naturally yields a solution of (1) [16]. IPMs follow this central path approximately and approach a solution of (1) as $\mu\to 0$.

For a given $(x;s):=(x^0;s^0)$, applying Newton's method to the system (2) yields the following Newton system:

\[
M\Delta x+N\Delta s=0,\qquad S\Delta x+X\Delta s=\mu e-xs.
\]
(3)

By taking a step along the search direction $(\Delta x;\Delta s)$, we define the new iterate $(x_+;s_+)$, where, for some step size $\alpha>0$,

\[
x_+:=x+\alpha\Delta x,\qquad s_+:=s+\alpha\Delta s.
\]
(4)
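For concreteness, here is a minimal Python sketch that assembles and solves the Newton system (3) as a single $2n\times 2n$ linear system; the function name and the use of a dense solver are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def newton_direction(M, N, x, s, mu):
    """Solve the Newton system (3):
        M dx + N ds = 0,
        S dx + X ds = mu*e - x*s,
    where X = diag(x) and S = diag(s). The coefficient matrix is
    nonsingular for P0-pairs by Lemma 2.3."""
    n = len(x)
    A = np.block([[M, N],
                  [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(A, rhs)
    return d[:n], d[n:]  # (dx, ds)
```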

To motivate the new algorithm, we define the following scaled vectors:

\[
v:=\sqrt{\frac{xs}{\mu}},\qquad d:=\sqrt{\frac{x}{s}},\qquad d_x:=\frac{v\Delta x}{x},\qquad d_s:=\frac{v\Delta s}{s}.
\]
(5)

Using (5), we can rewrite the Newton system (3) as follows:

\[
\bar{M}d_x+\bar{N}d_s=0,\qquad d_x+d_s=v^{-1}-v,
\]
(6)

where $\bar{M}:=DMD$, $\bar{N}:=DND^{-1}$, and $D:=\operatorname{diag}(d)$. Note that the right-hand side of the second equation of (6) equals the negative gradient of the logarithmic barrier function $\Psi_l(v):=\sum_{i=1}^n\psi_l(v_i)$ with $\psi_l(t)=\frac{t^2-1}{2}-\log t$, i.e.,

\[
d_x+d_s=-\nabla\Psi_l(v).
\]
(7)

The interior-point algorithm works as follows. Assume that we are given a strictly feasible point $(x;s)$ in a $\tau$-neighborhood of the current $\mu$-center. We update $\mu$ to $\mu_+=(1-\theta)\mu$ for some fixed $\theta\in(0,1)$ and solve the system (3) to obtain the search direction. Positivity of the new iterate is ensured by a suitable choice of the step size $\alpha$. This procedure is repeated until we find an iterate $(x_+;s_+)$ in a $\tau$-neighborhood of the $\mu_+$-center, and then we set $\mu:=\mu_+$ and $(x;s):=(x_+;s_+)$. The whole process is repeated until $n\mu<\epsilon$ (see Algorithm 1).

Algorithm 1 Generic interior-point algorithm for the $P_*(\kappa)$-HLCP
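Since Algorithm 1 appears only as a figure, the following Python sketch reconstructs its generic outer/inner loop from the description above. The interfaces `Psi`, `direction`, and `step_size` are assumed placeholders for the kernel-based proximity (14), the modified Newton system (15), and the default step size (22) introduced later; this is a sketch under those assumptions, not the paper's literal statement.

```python
import numpy as np

def generic_ipm(M, N, x, s, Psi, direction, step_size,
                theta=0.5, tau=None, eps=1e-8):
    """Sketch of Algorithm 1. Psi(v) is the proximity function,
    direction(M, N, x, s, mu) returns (dx, ds), and step_size(v, dx, ds)
    picks a feasible alpha. (x, s) is assumed strictly feasible."""
    n = len(x)
    tau = n if tau is None else tau               # large-update default
    mu = x @ s / n
    while n * mu >= eps:                          # outer iteration
        mu *= 1.0 - theta                         # mu-update
        v = np.sqrt(x * s / mu)
        while Psi(v) > tau:                       # inner iteration
            dx, ds = direction(M, N, x, s, mu)
            alpha = step_size(v, dx, ds)
            x, s = x + alpha * dx, s + alpha * ds
            v = np.sqrt(x * s / mu)
    return x, s
```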

If $\tau=O(n)$ and $\theta=\Theta(1)$, then the algorithm is called a large-update method. When $\tau=O(1)$ and $\theta=\Theta(\frac{1}{\sqrt{n}})$, we call the algorithm a small-update method.

3 New kernel functions

In this section we define new classes of kernel functions and give their essential properties.

A function $\psi:\mathbf{R}_{++}\to\mathbf{R}_+$ is called a kernel function if ψ is twice differentiable and satisfies the following conditions:

\[
\psi'(1)=\psi(1)=0,\qquad \psi''(t)>0,\ t>0,\qquad \lim_{t\to 0^+}\psi(t)=\lim_{t\to\infty}\psi(t)=\infty.
\]
(8)

We define the new classes of kernel functions $\psi_j(t)$, $j\in\{1,2\}$, in Table 1 and give the first three derivatives of $\psi_j(t)$, $j\in\{1,2\}$, in Table 2 and Table 3.

Table 1 Kernel functions ($p\ge 1$, $r\ge 1$): $\psi_1(t)=\frac{e(t^2-1)}{2}+\frac{e^{p(g_1(t)-e)}-1}{pr}$ with $g_1(t)=e^{t^{-r}}$; $\psi_2(t)=\frac{t^2-1}{2}+\frac{e^{p(g_2(t)-1)}-1}{pr}$ with $g_2(t)=e^{t^{-r}-1}$
Table 2 The first two derivatives of the kernel functions
Table 3 The third derivative of the kernel functions

In the following lemma, we show that $\psi(t):=\psi_j(t)$, $j\in\{1,2\}$, are eligible [3].

Lemma 3.1 Let $\psi(t):=\psi_j(t)$, $j\in\{1,2\}$, be defined as in Table 1. Then $\psi_j$, $j\in\{1,2\}$, satisfy the following eligibility conditions:

(a) $t\psi''(t)+\psi'(t)>0$, $t>0$, i.e., ψ is exponentially convex;

(b) $t\psi''(t)-\psi'(t)>0$, $t>0$;

(c) $\psi_j^{(3)}(t)<0$, $t>0$;

(d) $2(\psi''(t))^2-\psi'(t)\psi_j^{(3)}(t)>0$, $t>0$.

Proof From Table 4, Table 3, and Table 5, it follows that $\psi_j(t)$, $j\in\{1,2\}$, satisfy the eligibility conditions (a)-(d). □

Table 4 Conditions (a) and (b)
Table 5 Condition (d)

Remark 3.2 For $\psi_j(t)$, $j\in\{1,2\}$, let $\psi_{b1}(t)=\psi_1(t)-\frac{e(t^2-1)}{2}$ and $\psi_{b2}(t)=\psi_2(t)-\frac{t^2-1}{2}$ denote the barrier terms.

From Table 2,

\[
\psi_1''(t)\ge e,\qquad \psi_2''(t)\ge 1,\qquad t>0.
\]
(9)

Since $\psi_{bj}'(t)<0$, $j\in\{1,2\}$, from Table 2, the barrier terms $\psi_{bj}(t)$, $j\in\{1,2\}$, are monotonically decreasing with respect to $t>0$.
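Table 2 is not reproduced here; for reference, the barrier-term derivatives implied by the proof of Lemma 3.3 below (our reconstruction) make this monotonicity explicit:

\[
\psi_{b1}'(t)=-e^{p(g_1(t)-e)}\,g_1(t)\,t^{-r-1}<0,\qquad
\psi_{b2}'(t)=-e^{p(g_2(t)-1)}\,g_2(t)\,t^{-r-1}<0,\qquad t>0.
\]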

Let $\rho_j:[0,\infty)\to(0,1]$ and $\varrho_j:[0,\infty)\to[1,\infty)$ denote the inverse functions of the restriction of $-\frac{1}{2}\psi_j'(t)$ to $0<t\le 1$ and of $\psi_j(t)$ for $t\ge 1$, respectively, $j\in\{1,2\}$. Then

\[
z=-\tfrac{1}{2}\psi_j'(t)\ \Longleftrightarrow\ t=\rho_j(z),\qquad 0<t\le 1,
\]
(10)

and

\[
u=\psi_j(t)\ \Longleftrightarrow\ t=\varrho_j(u),\qquad t\ge 1.
\]
(11)

Lemma 3.3 Let $\rho_j(z)$, $j\in\{1,2\}$, be defined as in (10). Then we have, for $p\ge 1$ and $r\ge 1$:

(i) $\rho_1(z)\ge\bigl(\log\bigl(e+p^{-1}\log(e+2z)\bigr)\bigr)^{-1/r}$, $z\ge 0$;

(ii) $\rho_2(z)\ge\bigl(1+\log\bigl(1+p^{-1}\log(1+2z)\bigr)\bigr)^{-1/r}$, $z\ge 0$.

Proof For (i), using (10) and Table 2, we have the equation

\[
-et+e^{p(g_1(t)-e)}g_1(t)t^{-r-1}=2z,\qquad g_1(t)=e^{t^{-r}},\quad 0<t\le 1.
\]

Since $0<t\le 1$,

\[
e^{p(g_1(t)-e)}g_1(t)t^{-r-1}=et+2z\le e+2z,\qquad g_1(t)=e^{t^{-r}}.
\]
(12)

Since $g_1(t)\ge e$ and $t^{-r-1}\ge 1$, taking the natural logarithm on both sides of (12) yields $e^{t^{-r}}\le e+p^{-1}\log(e+2z)$. Hence we have

\[
\rho_1(z)=t\ge\Bigl(\log\bigl(e+p^{-1}\log(e+2z)\bigr)\Bigr)^{-1/r}.
\]

In the same way as for (i), we obtain the result (ii). This completes the proof. □

Lemma 3.4 Let $\psi_j(t)$, $j\in\{1,2\}$, be defined as in Table 1. Then we have:

(i) $\frac{e}{2}(t-1)^2\le\psi_1(t)\le\frac{1}{2e}\bigl(\psi_1'(t)\bigr)^2$, $t>0$;

(ii) $\frac{1}{2}(t-1)^2\le\psi_2(t)\le\frac{1}{2}\bigl(\psi_2'(t)\bigr)^2$, $t>0$.

Proof For (i), using the first condition of (8) and (9), we have

\[
\psi_1(t)=\int_1^t\int_1^\xi\psi_1''(\zeta)\,d\zeta\,d\xi\ge e\int_1^t\int_1^\xi d\zeta\,d\xi=\frac{e}{2}(t-1)^2,
\]

which proves the first inequality. The second inequality is obtained as follows:

\[
\psi_1(t)=\int_1^t\int_1^\xi\psi_1''(\zeta)\,d\zeta\,d\xi
\le\frac{1}{e}\int_1^t\int_1^\xi\psi_1''(\xi)\,\psi_1''(\zeta)\,d\zeta\,d\xi
=\frac{1}{e}\int_1^t\psi_1''(\xi)\,\psi_1'(\xi)\,d\xi
=\frac{1}{e}\int_1^t\psi_1'(\xi)\,d\psi_1'(\xi)
=\frac{1}{2e}\bigl(\psi_1'(t)\bigr)^2.
\]

For (ii), the result follows in the same way. This completes the proof. □

Lemma 3.5 Let $\varrho_j(u)$, $j\in\{1,2\}$, be defined as in (11). Then we have:

(i) $\varrho_1(u)\le 1+\sqrt{\frac{2u}{e}}$, $u\ge 0$;

(ii) $\varrho_2(u)\le 1+\sqrt{2u}$, $u\ge 0$.

Proof For (i), using the first inequality in Lemma 3.4, we have $u=\psi_1(t)\ge\frac{e}{2}(t-1)^2$ for $t\ge 1$, and hence

\[
t=\varrho_1(u)\le 1+\sqrt{\frac{2u}{e}},\qquad u\ge 0.
\]

Similarly, we obtain the result (ii). This completes the proof. □

In this paper we replace the logarithmic barrier function $\Psi_l(v)$ in (7) by a strictly convex function $\Psi(v)$ as follows:

\[
d_x+d_s=-\nabla\Psi(v),
\]
(13)

where

\[
\Psi(v):=\Psi_j(v)=\sum_{i=1}^n\psi_j(v_i),\qquad j\in\{1,2\},
\]
(14)

and $\psi_j(t)$, $j\in\{1,2\}$, are defined in Table 1. Since $\Psi(v)$ is strictly convex and attains its minimal value at $v=e$, we have

\[
\Psi(v)=0\ \Longleftrightarrow\ v=e\ \Longleftrightarrow\ x=x(\mu),\ s=s(\mu).
\]

Using (5) and (13), we modify the Newton system (3) as follows:

\[
M\Delta x+N\Delta s=0,\qquad S\Delta x+X\Delta s=-\mu v\nabla\Psi(v).
\]
(15)

By Corollary 2.4, the system (15) has a unique solution $(\Delta x;\Delta s)$, which is the modified Newton search direction. Consequently, we use $\Psi(v)$ as the proximity function to find a search direction and to measure the proximity between the current iterate and the $\mu$-center. We also define the norm-based proximity measure $\delta_j(v)$, $j\in\{1,2\}$, as follows:

\[
\delta_j(v):=\frac{1}{2}\bigl\|\nabla\Psi_j(v)\bigr\|=\frac{1}{2}\|d_x+d_s\|.
\]
(16)
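As an illustration, here is a small Python sketch evaluating $\psi_1$, its first derivative (reconstructed from the proof of Lemma 3.3, since Table 2 is not reproduced), and the proximity values (14) and (16); the function names are our own.

```python
import numpy as np

E = np.e

def psi1(t, p=1.0, r=1.0):
    """psi_1(t) = e(t^2-1)/2 + (exp(p(g1(t)-e)) - 1)/(p*r), g1(t) = exp(t^{-r})."""
    g1 = np.exp(t ** (-r))
    return E * (t ** 2 - 1) / 2 + (np.exp(p * (g1 - E)) - 1) / (p * r)

def dpsi1(t, p=1.0, r=1.0):
    """psi_1'(t) = e*t - exp(p(g1(t)-e)) * g1(t) * t^{-(r+1)}, cf. Lemma 3.3."""
    g1 = np.exp(t ** (-r))
    return E * t - np.exp(p * (g1 - E)) * g1 * t ** (-(r + 1))

def proximity(v, p=1.0, r=1.0):
    """Barrier value Psi_1(v) from (14) and measure delta_1(v) from (16)."""
    return np.sum(psi1(v, p, r)), 0.5 * np.linalg.norm(dpsi1(v, p, r))
```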

The following lemma gives a relation between two proximity measures.

Lemma 3.6 Let $\delta_j(v)$ and $\Psi_j(v)$, $j\in\{1,2\}$, be defined as in (16) and (14), respectively. Then we have:

(i) $\delta_1(v)\ge\sqrt{\frac{e\,\Psi_1(v)}{2}}$;

(ii) $\delta_2(v)\ge\sqrt{\frac{\Psi_2(v)}{2}}$.

Proof For (i), using (16) and the second inequality in Lemma 3.4, we have

\[
\delta_1^2(v)=\frac{1}{4}\bigl\|\nabla\Psi_1(v)\bigr\|^2=\frac{1}{4}\sum_{i=1}^n\bigl(\psi_1'(v_i)\bigr)^2\ge\frac{e\,\Psi_1(v)}{2}.
\]

Hence $\delta_1(v)\ge\sqrt{\frac{e\,\Psi_1(v)}{2}}$.

In the same way we obtain (ii). This completes the proof. □

Using the eligibility conditions (b) and (c) in Lemma 3.1, we obtain the following lemma.

Lemma 3.7 (Theorem 3.2 in [3])

Let $\varrho_j$, $j\in\{1,2\}$, be defined as in (11). Then we have

\[
\Psi_j(\beta v)\le n\,\psi_j\!\left(\beta\,\varrho_j\!\left(\frac{\Psi_j(v)}{n}\right)\right),\qquad v\in\mathbf{R}^n_{++},\ \beta\ge 1.
\]

In the following lemma, we give upper bounds for $\Psi_j(v)$, $j\in\{1,2\}$, after a μ-update.

Lemma 3.8 Let $\Psi_j(v)$, $j\in\{1,2\}$, be defined as in (14), let $0<\theta<1$, and let $v_+=\frac{v}{\sqrt{1-\theta}}$. If $\Psi_j(v)\le\tau$, $j\in\{1,2\}$, then we have:

(i) $\Psi_1(v_+)\le\frac{en\theta+2\tau+2\sqrt{2en\tau}}{2(1-\theta)}$ and $\Psi_1(v_+)\le\frac{\psi_1''(1)\left(\sqrt{2\tau/e}+\theta\sqrt{n}\right)^2}{2(1-\theta)}$;

(ii) $\Psi_2(v_+)\le\frac{n\theta+2\tau+2\sqrt{2n\tau}}{2(1-\theta)}$ and $\Psi_2(v_+)\le\frac{\psi_2''(1)\left(\sqrt{2\tau}+\theta\sqrt{n}\right)^2}{2(1-\theta)}$.

Proof For the first inequality of (i), using Remark 3.2 with $\psi_{b1}(1)=0$ and $\psi_{b1}'(t)<0$, we get

\[
\psi_1(t)\le\frac{e(t^2-1)}{2},\qquad t\ge 1.
\]
(17)

Using Lemma 3.7, (17), and Lemma 3.5(i), we have

\[
\Psi_1(v_+)\le\frac{en}{2}\left(\frac{\varrho_1^2(\frac{\tau}{n})}{1-\theta}-1\right)\le\frac{en}{2}\left(\frac{\bigl(1+\sqrt{\frac{2\tau}{en}}\bigr)^2}{1-\theta}-1\right)=\frac{en\theta+2\tau+2\sqrt{2en\tau}}{2(1-\theta)}.
\]

For the second inequality of (i), using Taylor's theorem, $\psi_1(1)=\psi_1'(1)=0$, and $\psi_1^{(3)}(t)<0$, we have

\[
\psi_1(t)=\psi_1(1)+\psi_1'(1)(t-1)+\frac{1}{2}\psi_1''(1)(t-1)^2+\frac{1}{3!}\psi_1^{(3)}(\xi)(t-1)^3<\frac{\psi_1''(1)}{2}(t-1)^2
\]
(18)

for some $\xi$ with $1\le\xi\le t$. Since $\frac{1}{\sqrt{1-\theta}}\ge 1$ and $\varrho_1(\frac{\tau}{n})\ge 1$, we have $\frac{\varrho_1(\tau/n)}{\sqrt{1-\theta}}\ge 1$. Using Lemma 3.7, (18), and Lemma 3.5(i), we have

\[
\Psi_1(v_+)\le\frac{n\psi_1''(1)}{2}\left(\frac{\varrho_1(\frac{\tau}{n})}{\sqrt{1-\theta}}-1\right)^2\le\frac{n\psi_1''(1)}{2}\left(\frac{1+\sqrt{\frac{2\tau}{en}}-\sqrt{1-\theta}}{\sqrt{1-\theta}}\right)^2\le\frac{n\psi_1''(1)}{2}\left(\frac{\sqrt{\frac{2\tau}{en}}+\theta}{\sqrt{1-\theta}}\right)^2=\frac{\psi_1''(1)}{2(1-\theta)}\left(\sqrt{\frac{2\tau}{e}}+\theta\sqrt{n}\right)^2,
\]

where the last inequality holds since $1-\sqrt{1-\theta}=\frac{\theta}{1+\sqrt{1-\theta}}\le\theta$ for $0<\theta<1$.

The proof of (ii) follows in the same way. This completes the proof. □

Define

\[
\bar\Psi_{1,0}:=\frac{en\theta+2\tau+2\sqrt{2en\tau}}{2(1-\theta)},\qquad
\tilde\Psi_{1,0}:=\frac{\psi_1''(1)}{2(1-\theta)}\left(\sqrt{\frac{2\tau}{e}}+\theta\sqrt{n}\right)^2
\]
(19)

and

\[
\bar\Psi_{2,0}:=\frac{n\theta+2\tau+2\sqrt{2n\tau}}{2(1-\theta)},\qquad
\tilde\Psi_{2,0}:=\frac{\psi_2''(1)}{2(1-\theta)}\left(\sqrt{2\tau}+\theta\sqrt{n}\right)^2.
\]
(20)

We will use $\bar\Psi_{j,0}$ and $\tilde\Psi_{j,0}$ as upper bounds for $\Psi_j(v)$ in (14) for large- and small-update methods, respectively, $j\in\{1,2\}$.

Remark 3.9 For the large-update method with $\tau=O(n)$ and $\theta=\Theta(1)$, we have $\bar\Psi_{j,0}=O(n)$, $j\in\{1,2\}$; for the small-update method with $\tau=O(1)$ and $\theta=\Theta(\frac{1}{\sqrt{n}})$, we have $\tilde\Psi_{j,0}=O(\psi_j''(1))$, $j\in\{1,2\}$.

For fixed μ, if we take a step of size α, then, using (4) and (5), the new iterates are

\[
x_+=x\left(e+\alpha\frac{\Delta x}{x}\right)=x\left(e+\alpha\frac{d_x}{v}\right)=\frac{x}{v}(v+\alpha d_x)
\]

and

\[
s_+=s\left(e+\alpha\frac{\Delta s}{s}\right)=s\left(e+\alpha\frac{d_s}{v}\right)=\frac{s}{v}(v+\alpha d_s).
\]

Thus, for fixed $\mu>0$,

\[
v_+:=\sqrt{\frac{x_+s_+}{\mu}}=\sqrt{(v+\alpha d_x)(v+\alpha d_s)}.
\]

For notational convenience, let $\Psi(v):=\Psi_j(v)$ and $\psi(t):=\psi_j(t)$, $j\in\{1,2\}$.

For $\alpha>0$, we define

\[
f(\alpha):=\Psi(v_+)-\Psi(v),
\]

the difference of proximities between the new iterate and the current iterate for fixed μ. By condition (a) in Lemma 3.1, we have

\[
\Psi(v_+)=\Psi\Bigl(\sqrt{(v+\alpha d_x)(v+\alpha d_s)}\Bigr)\le\frac{1}{2}\bigl(\Psi(v+\alpha d_x)+\Psi(v+\alpha d_s)\bigr).
\]

Hence $f(\alpha)\le f_1(\alpha)$, where

\[
f_1(\alpha):=\frac{1}{2}\bigl(\Psi(v+\alpha d_x)+\Psi(v+\alpha d_s)\bigr)-\Psi(v).
\]

Then $f(0)=f_1(0)=0$. Differentiating $f_1(\alpha)$ with respect to α, we have

\[
f_1'(\alpha)=\frac{1}{2}\sum_{i=1}^n\Bigl(\psi'\bigl(v_i+\alpha[d_x]_i\bigr)[d_x]_i+\psi'\bigl(v_i+\alpha[d_s]_i\bigr)[d_s]_i\Bigr),
\]

where $[d_x]_i$ and $[d_s]_i$ denote the $i$th components of the vectors $d_x$ and $d_s$, respectively. Using (13) and (16), we have

\[
f_1'(0)=\frac{1}{2}\nabla\Psi(v)^T(d_x+d_s)=-\frac{1}{2}\nabla\Psi(v)^T\nabla\Psi(v)=-2\delta^2(v).
\]

Differentiating $f_1'(\alpha)$ once more with respect to α, we have

\[
f_1''(\alpha)=\frac{1}{2}\sum_{i=1}^n\Bigl(\psi''\bigl(v_i+\alpha[d_x]_i\bigr)[d_x]_i^2+\psi''\bigl(v_i+\alpha[d_s]_i\bigr)[d_s]_i^2\Bigr).
\]

Since $f_1''(\alpha)>0$ unless $d_x=d_s=0$, $f_1(\alpha)$ is strictly convex in α. Since $\{M,N\}$ is a $P_*(\kappa)$-pair and $M\Delta x+N\Delta s=0$ by (15), we have, for $(\Delta x;\Delta s)\in\mathbf{R}^{2n}$,

\[
(1+4\kappa)\sum_{i\in I_+}[\Delta x]_i[\Delta s]_i+\sum_{i\in I_-}[\Delta x]_i[\Delta s]_i\ge 0,
\]

where $I_+=\{i\in I:[\Delta x]_i[\Delta s]_i\ge 0\}$ and $I_-=I\setminus I_+$. Since $d_xd_s=\frac{v^2\,\Delta x\Delta s}{xs}=\frac{\Delta x\Delta s}{\mu}$ and $\mu>0$, we have

\[
(1+4\kappa)\sum_{i\in I_+}[d_x]_i[d_s]_i+\sum_{i\in I_-}[d_x]_i[d_s]_i\ge 0.
\]

For notational convenience, we write $\Psi:=\Psi_j(v)$ and $\delta:=\delta_j(v)$, $j\in\{1,2\}$.

In the following lemmas, we collect some technical results from [5].

Lemma 3.10 (Lemma 4.4 in [5])

$f_1'(\alpha)\le 0$ if α satisfies

\[
-\psi'\bigl(v_{\min}-2\alpha\delta\sqrt{1+2\kappa}\bigr)+\psi'(v_{\min})\le\frac{2\delta}{\sqrt{1+2\kappa}}.
\]
(21)

Lemma 3.11 (Lemma 4.5 in [5])

Let $\rho:=\rho_j$, $j\in\{1,2\}$, be defined as in (10). Then, in the worst case, the largest step size α satisfying (21) is given by

\[
\bar\alpha:=\frac{1}{2\delta\sqrt{1+2\kappa}}\left(\rho(\delta)-\rho\left(\Bigl(1+\frac{1}{\sqrt{1+2\kappa}}\Bigr)\delta\right)\right).
\]

Lemma 3.12 (Lemma 4.6 in [5])

Let ρ and $\bar\alpha$ be defined as in Lemma 3.11. Then

\[
\bar\alpha\ge\frac{1}{(1+2\kappa)\,\psi''\left(\rho\left(\bigl(1+\frac{1}{\sqrt{1+2\kappa}}\bigr)\delta\right)\right)}.
\]

Define

\[
\tilde\alpha:=\frac{1}{(1+2\kappa)\,\psi''\left(\rho\left(\bigl(1+\frac{1}{\sqrt{1+2\kappa}}\bigr)\delta\right)\right)}.
\]
(22)

Then we have $\tilde\alpha\le\bar\alpha$.
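A numerical sketch of the default step size (22) for $\psi_1$, reusing `dpsi1` from the earlier sketch: $\rho_1$ is obtained with a root finder, and $\psi_1''$ by a central difference (our shortcut; Table 2 gives the closed form, which is not reproduced here).

```python
import numpy as np
from scipy.optimize import brentq

def rho1(z, p=1.0, r=1.0):
    """Invert z = -psi_1'(t)/2 on (0, 1], cf. (10). -psi_1'/2 decreases
    from a huge value near 0 to 0 at t = 1; the bracket [0.2, 1] covers
    all moderate z in double precision."""
    return brentq(lambda t: -0.5 * dpsi1(t, p, r) - z, 0.2, 1.0)

def ddpsi1(t, p=1.0, r=1.0, h=1e-6):
    """psi_1''(t) via a central difference of dpsi1."""
    return (dpsi1(t + h, p, r) - dpsi1(t - h, p, r)) / (2 * h)

def default_step_size(delta, kappa, p=1.0, r=1.0):
    """Default step size alpha~ from (22)."""
    t = rho1((1.0 + 1.0 / np.sqrt(1.0 + 2.0 * kappa)) * delta, p, r)
    return 1.0 / ((1.0 + 2.0 * kappa) * ddpsi1(t, p, r))
```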

Lemma 3.13 Let $\tilde\alpha$ be defined as in (22). Then, for $\kappa\ge 0$, we have

\[
f(\tilde\alpha)\le-\frac{\delta^2}{(1+2\kappa)\,\psi''\left(\rho\left(\bigl(1+\frac{1}{\sqrt{1+2\kappa}}\bigr)\delta\right)\right)}.
\]

Lemma 3.14 (Lemma 4.10 in [5])

The right-hand sides in Lemma  3.13 are monotonically decreasing with respect to δ.

Lemma 3.15 (Proposition 1.3.2 in [17])

Let $t_0,t_1,\ldots,t_K$ be a sequence of positive numbers such that

\[
t_{k+1}\le t_k-\lambda t_k^{1-\gamma},\qquad k=0,1,\ldots,K-1,
\]

where $\lambda>0$ and $0<\gamma\le 1$. Then $K\le\frac{t_0^\gamma}{\lambda\gamma}$.

We define the value of Ψ(v) immediately after a μ-update as $\Psi_0$, and the subsequent values within the same outer iteration are denoted by $\Psi_k$, $k=1,2,\ldots$. If K denotes the number of inner iterations in an outer iteration, then we have

\[
\Psi_{K-1}>\tau,\qquad 0\le\Psi_K\le\tau.
\]

Theorem 3.16 Let a $P_*(\kappa)$-HLCP be given. If $\tau\ge 1$, then an upper bound for the total number of iterations is given by

\[
\frac{\Psi_0^\gamma}{\theta\lambda\gamma}\log\frac{n\mu_0}{\epsilon}.
\]

Proof By Lemma 3.15 and Lemma II.17 in [18], the numbers of inner and outer iterations are bounded by $\frac{\Psi_0^\gamma}{\lambda\gamma}$ and $\frac{1}{\theta}\log\frac{n\mu_0}{\epsilon}$, respectively. Multiplying the bound on the number of inner iterations by that on the number of outer iterations gives the desired result. This completes the proof. □

4 Application to new kernel functions

For the complexity analysis, we follow a framework similar to the one used in [3] for LO problems.

We apply the framework in Table 6 to the specific kernel function

\[
\psi_1(t)=\frac{e(t^2-1)}{2}+\frac{e^{p(g_1(t)-e)}-1}{pr},\qquad g_1(t)=e^{t^{-r}},\quad p\ge 1,\ r\ge 1.
\]
Table 6 Framework for analyzing the iteration bounds

Step 1: Using Lemma 3.3, $\rho_1(z)\ge\bigl(\log\bigl(e+p^{-1}\log(e+2z)\bigr)\bigr)^{-1/r}$, $z\ge 0$.

Step 2: By Lemma 3.5, the inverse function $\varrho_1$ of $\psi_1(t)$ for $t\ge 1$ satisfies

\[
\varrho_1(u)\le 1+\sqrt{\frac{2u}{e}},\qquad u\ge 0.
\]

Step 3: Using Lemma 3.6, we obtain

\[
\delta_1(v)\ge\sqrt{\frac{e\,\Psi_1(v)}{2}},\qquad v>0.
\]

Step 4: Using (19) and $\psi_1''(1)=e(pre+2r+2)$ from Table 2, we have the following:

(i) For the large-update method, $\Psi_0\le\frac{en\theta+2\tau+2\sqrt{2en\tau}}{2(1-\theta)}=:\bar\Psi_{1,0}$.

(ii) For the small-update method, $\Psi_0\le\frac{e(pre+2r+2)\left(\sqrt{2\tau/e}+\theta\sqrt{n}\right)^2}{2(1-\theta)}=:\tilde\Psi_{1,0}$.

Step 5: Define $L_1(\Psi_1,p):=e+p^{-1}\log\bigl(e+2\sqrt{2e\Psi_1}\bigr)$. Using $\psi_1^{(3)}(t)<0$, Step 1, $1+\frac{1}{\sqrt{1+2\kappa}}\le 2$, Step 3, and Table 2, we obtain

\[
\psi_1''\!\left(\rho_1\!\left(\Bigl(1+\tfrac{1}{\sqrt{1+2\kappa}}\Bigr)\sqrt{\tfrac{e\Psi_1}{2}}\right)\right)\le 4e\sqrt{\Psi_1}\,L_1(\Psi_1,p)\bigl(\log L_1(\Psi_1,p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\Psi_1,p)+2r+1\bigr),
\]
(23)

where we have also used the assumption $\Psi_1\ge\tau\ge 1$. Using Lemma 3.13, Lemma 3.14, Lemma 3.6, and (23), we have

\[
\begin{aligned}
f(\tilde\alpha)&\le-\frac{\delta^2}{(1+2\kappa)\,\psi_1''\left(\rho_1\left(\bigl(1+\frac{1}{\sqrt{1+2\kappa}}\bigr)\delta\right)\right)}
\le-\frac{\frac{e\Psi_1}{2}}{(1+2\kappa)\,\psi_1''\left(\rho_1\left(\bigl(1+\frac{1}{\sqrt{1+2\kappa}}\bigr)\sqrt{\frac{e\Psi_1}{2}}\right)\right)}\\
&\le-\frac{\frac{e\Psi_1}{2}}{4(1+2\kappa)e\sqrt{\Psi_1}\,L_1(\Psi_1,p)\bigl(\log L_1(\Psi_1,p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\Psi_1,p)+2r+1\bigr)}\\
&=-\frac{\sqrt{\Psi_1}}{8(1+2\kappa)L_1(\Psi_1,p)\bigl(\log L_1(\Psi_1,p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\Psi_1,p)+2r+1\bigr)}\\
&\le-\frac{\sqrt{\Psi_1}}{8(1+2\kappa)L_1(\Psi_{1,0},p)\bigl(\log L_1(\Psi_{1,0},p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\Psi_{1,0},p)+2r+1\bigr)},
\end{aligned}
\]

where the last inequality follows from $L_1(\Psi_{1,0},p):=e+p^{-1}\log\bigl(e+2\sqrt{2e\Psi_{1,0}}\bigr)$ and the assumption $\Psi_1\le\Psi_{1,0}$.

Step 6: Using Theorem 3.16, Step 4 with $\Psi_{1,0}\le\bar\Psi_{1,0}$ and $\Psi_{1,0}\le\tilde\Psi_{1,0}$, and Step 5 with $\gamma=\frac{1}{2}$ and $\frac{1}{\lambda}=8(1+2\kappa)L_1(\Psi_{1,0},p)\bigl(\log L_1(\Psi_{1,0},p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\Psi_{1,0},p)+2r+1\bigr)$, we obtain the following upper bounds on the total number of iterations for large- and small-update methods.

(i) For large-update methods,

\[
8(1+2\kappa)L_1(\bar\Psi_{1,0},p)\bigl(\log L_1(\bar\Psi_{1,0},p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\bar\Psi_{1,0},p)+2r+1\bigr)\bar\Psi_{1,0}^{\frac{1}{2}}\,\frac{1}{\theta}\log\frac{n\mu_0}{\epsilon},
\]

where $L_1(\bar\Psi_{1,0},p):=e+p^{-1}\log\bigl(e+2\sqrt{2e\bar\Psi_{1,0}}\bigr)$.

(ii) For small-update methods,

\[
8(1+2\kappa)L_1(\tilde\Psi_{1,0},p)\bigl(\log L_1(\tilde\Psi_{1,0},p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_1(\tilde\Psi_{1,0},p)+2r+1\bigr)\tilde\Psi_{1,0}^{\frac{1}{2}}\,\frac{1}{\theta}\log\frac{n\mu_0}{\epsilon},
\]

where $L_1(\tilde\Psi_{1,0},p):=e+p^{-1}\log\bigl(e+2\sqrt{2e\tilde\Psi_{1,0}}\bigr)$.

Step 7: Using Step 6 and Remark 3.9, for the large-update method with $p=\log\bigl(e+2\sqrt{2e\bar\Psi_{1,0}}\bigr)=O(\log n)$ and $r=1$, the algorithm has $O\bigl((1+2\kappa)\sqrt{n}\log n\log\frac{n\mu_0}{\epsilon}\bigr)$ complexity. For the small-update method with $p=1$ and $r=1$, the algorithm has $O\bigl((1+2\kappa)\sqrt{n}\log\frac{n\mu_0}{\epsilon}\bigr)$ complexity. These are currently the best known complexity results.
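To make Step 7 concrete, the bookkeeping for the large-update bound (our own expansion of the paper's parameter choices) is

\[
\bar\Psi_{1,0}=O(n),\qquad
L_1(\bar\Psi_{1,0},p)=e+p^{-1}\log\bigl(e+2\sqrt{2e\bar\Psi_{1,0}}\bigr)=e+1=O(1)\quad\text{for }p=\log\bigl(e+2\sqrt{2e\bar\Psi_{1,0}}\bigr),
\]

so that $\log L_1(\bar\Psi_{1,0},p)=O(1)$ and, with $r=1$, $prL_1(\bar\Psi_{1,0},p)+2r+1=O(\log n)$. Since $\bar\Psi_{1,0}^{1/2}=O(\sqrt{n})$ and $\theta=\Theta(1)$, the bound in Step 6(i) becomes $O\bigl((1+2\kappa)\sqrt{n}\log n\log\frac{n\mu_0}{\epsilon}\bigr)$.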

Remark 4.1 For the kernel function $\psi_2(t)$ in Table 1, applying the same framework yields the iteration bounds

\[
8(1+2\kappa)L_2(\bar\Psi_{2,0},p)\bigl(\log L_2(\bar\Psi_{2,0},p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_2(\bar\Psi_{2,0},p)+2r+1\bigr)\bar\Psi_{2,0}^{\frac{1}{2}}\,\frac{1}{\theta}\log\frac{n\mu_0}{\epsilon}
\]

and

\[
8(1+2\kappa)L_2(\tilde\Psi_{2,0},p)\bigl(\log L_2(\tilde\Psi_{2,0},p)\bigr)^{\frac{2(r+1)}{r}}\bigl(prL_2(\tilde\Psi_{2,0},p)+2r+1\bigr)\tilde\Psi_{2,0}^{\frac{1}{2}}\,\frac{1}{\theta}\log\frac{n\mu_0}{\epsilon}
\]

for large- and small-update methods, respectively, where $L_2(\bar\Psi_{2,0},p):=1+p^{-1}\log\bigl(1+2\sqrt{2\bar\Psi_{2,0}}\bigr)$ and $L_2(\tilde\Psi_{2,0},p):=1+p^{-1}\log\bigl(1+2\sqrt{2\tilde\Psi_{2,0}}\bigr)$. Taking $p=\log\bigl(1+2\sqrt{2\bar\Psi_{2,0}}\bigr)=O(\log n)$ and $r=1$, the algorithm has $O\bigl((1+2\kappa)\sqrt{n}\log n\log\frac{n\mu_0}{\epsilon}\bigr)$ complexity for large-update methods. Choosing $p=1$ and $r=1$, the algorithm has $O\bigl((1+2\kappa)\sqrt{n}\log\frac{n\mu_0}{\epsilon}\bigr)$ complexity for small-update methods. In conclusion, we obtain the best known iteration bounds to date for large- and small-update methods for the kernel functions $\psi_j$, $j\in\{1,2\}$, in Table 1.

References

1. Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, San Diego; 1992.

2. Wang GQ, Bai YQ: Polynomial interior-point algorithm for $P_*(\kappa)$ horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248–263. 10.1016/j.cam.2009.07.014

3. Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101–128. 10.1137/S1052623403423114

4. Cho GM: A new large-update interior point algorithm for $P_*(\kappa)$ linear complementarity problems. J. Comput. Appl. Math. 2008, 216: 256–278.

5. Cho GM, Kim MK: A new large-update interior point algorithm for $P_*(\kappa)$ LCPs based on kernel functions. Appl. Math. Comput. 2006, 182: 1169–1183. 10.1016/j.amc.2006.04.060

6. Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007, 49: 259–270. 10.1017/S1446181100012827

7. Amini K, Peyghami MR: Exploring complexity of large update interior-point methods for $P_*(\kappa)$ linear complementarity problem based on kernel function. Appl. Math. Comput. 2009, 207: 501–513. 10.1016/j.amc.2008.11.002

8. Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766–782.

9. Ghami ME, Steihaug T: Kernel-function based primal-dual algorithms for $P_*(\kappa)$ linear complementarity problems. RAIRO Rech. Opér. 2010, 44: 185–205.

10. Lesaja G, Roos C: Unified analysis of kernel-based interior-point methods for $P_*(\kappa)$-linear complementarity problems. SIAM J. Optim. 2010, 20: 3014–3039. 10.1137/090766735

11. Ghami ME, Guennoun ZA, Bouali S, Steihaug T: Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term. J. Comput. Appl. Math. 2012, 236: 3613–3623. 10.1016/j.cam.2011.05.036

12. Cho GM, Cho YY, Lee YH: A primal-dual interior-point algorithm based on a new kernel function. ANZIAM J. 2010, 51: 476–491. 10.1017/S1446181110000908

13. Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, Berlin; 1991.

14. Tütüncü RH, Todd MJ: Reducing horizontal linear complementarity problems. Linear Algebra Appl. 1995, 223/224: 717–729.

15. Jansen B, Roos K, Terlaky T, Yoshise A: Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems. Math. Program. 1997, 78: 315–345.

16. Xiu N, Zhang J: A smoothing Gauss-Newton method for the generalized HLCP. J. Comput. Appl. Math. 2001, 129: 195–208. 10.1016/S0377-0427(00)00550-1

17. Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.

18. Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization: An Interior Approach. Wiley, Chichester; 1997.


Acknowledgements

This research of the first author was supported by the Basic Science Research Program through NRF funded by the Ministry of Education, Science, and Technology (No. 2012005767) and by the Research Fund Program of Research Institute for Basic Science, Pusan National University, Korea, 2012, Project No. RIBS-PNU-2012-102.

Author information

Correspondence to Gyeong-Mi Cho.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have equally contributed in designing a new algorithm and obtaining complexity results. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article: Lee, YH., Cho, YY. & Cho, GM. Kernel function based interior-point methods for horizontal linear complementarity problems. J. Inequal. Appl. 2013, 215 (2013). https://doi.org/10.1186/1029-242X-2013-215