Open Access

New complexity analysis of interior-point methods for the Cartesian P_*(κ)-SCLCP

Journal of Inequalities and Applications 2013, 2013:285

https://doi.org/10.1186/1029-242X-2013-285

Received: 21 March 2013

Accepted: 21 May 2013

Published: 6 June 2013

Abstract

In this paper, we give a unified analysis of both large- and small-update interior-point methods for the Cartesian P_*(κ)-linear complementarity problem over symmetric cones based on a finite barrier. The proposed finite barrier is used both for determining the search directions and for measuring the distance between the given iterate and the μ-center of the algorithm. The symmetry of the resulting search directions is enforced by the Nesterov-Todd scaling scheme. By means of Euclidean Jordan algebras, together with the features of the finite kernel function, we derive iteration bounds that match the currently best known iteration bounds for large- and small-update methods. Furthermore, our algorithm and its polynomial iteration complexity analysis provide a unified treatment for a class of primal-dual interior-point methods and their complexity analysis.

MSC: 90C33, 90C51.

Keywords

interior-point methods; linear complementarity problem; Cartesian P_*(κ)-property; Euclidean Jordan algebra; polynomial complexity

1 Introduction

Let (V, ∘) be an n-dimensional Euclidean Jordan algebra with rank r, equipped with the standard inner product ⟨x, s⟩ = tr(x ∘ s). Let K be the corresponding symmetric cone. For a linear transformation A : V → V and a q ∈ V, the linear complementarity problem over symmetric cones, denoted by SCLCP, is to find x, s ∈ V such that
x ∈ K, s = A(x) + q ∈ K and x ∘ s = 0.
(1)

Note that ⟨x, s⟩ = 0 ⇔ x ∘ s = 0 (Lemma 2.2 in [1]).
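When K is the nonnegative orthant, the Jordan product is the componentwise product and (1) reduces to the standard LCP. The following sketch (with an illustrative instance that is not from the paper) checks the three conditions and the equivalence ⟨x, s⟩ = 0 ⇔ x ∘ s = 0 for nonnegative x and s.

```python
import numpy as np

# Special case K = R^n_+: find x >= 0 with s = M x + q >= 0 and x * s = 0
# (componentwise product). Illustrative data: q is chosen so that
# x = (1, 0), s = (0, 1) solves the problem.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 0.0])
q = np.array([-2.0, 0.0])          # gives s = M x + q = (0, 1)
s = M @ x + q

assert np.all(x >= 0) and np.all(s >= 0)
# <x, s> = 0 is equivalent to the componentwise product being zero
assert np.allclose(x * s, 0.0)
assert np.isclose(x @ s, 0.0)
```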

The SCLCP is a wide class of problems that contains the linear complementarity problem (LCP), the second-order cone linear complementarity problem (SOCLCP), and the semidefinite linear complementarity problem (SDLCP) as special cases. For an overview of these and related results, we refer to the survey paper [2] and the references therein.

There are many solution approaches for the SCLCP. Among them, interior-point methods (IPMs) have gained the most attention. Faybusovich [3] made the first attempt to generalize IPMs to symmetric optimization (SO) and the SCLCP using the 'machinery' of Euclidean Jordan algebras. Potra [4] proposed an infeasible corrector-predictor IPM for the monotone SCLCP. Yoshise [5] proposed the homogeneous model for monotone nonlinear complementarity problems (NCP) over symmetric cones (SCNCP) and analyzed an IPM to solve it.

Let V be the Cartesian product of a finite number of simple Euclidean Jordan algebras (V_j, ∘) with dimensions n_j and ranks r_j for j = 1, …, N, that is, V = V_1 × ⋯ × V_N, with its cone of squares K = K_1 × ⋯ × K_N, where K_j is the cone of squares of V_j for j = 1, …, N. The dimension of V is n = Σ_{j=1}^{N} n_j and its rank is r = Σ_{j=1}^{N} r_j. Recall that a Euclidean Jordan algebra is said to be simple if it cannot be represented as the orthogonal direct sum of two Euclidean Jordan algebras.

We call the SCLCP the Cartesian P_*(κ)-SCLCP if the linear transformation A has the Cartesian P_*(κ)-property for some nonnegative constant κ, i.e.,
(1 + 4κ) Σ_{j ∈ I_+(x)} ⟨x^(j), [A(x)]^(j)⟩ + Σ_{j ∈ I_-(x)} ⟨x^(j), [A(x)]^(j)⟩ ≥ 0, ∀x ∈ V,
(2)

where I_+(x) = {j : ⟨x^(j), [A(x)]^(j)⟩ > 0} and I_-(x) = {j : ⟨x^(j), [A(x)]^(j)⟩ < 0} are two index sets. It is closely related to the Cartesian P_0- and P-properties, which were first introduced by Chen and Qi [6] over the space of symmetric matrices and later extended by Pan and Chen [7] and Luo and Xiu [8] to the space of second-order cones and to general Euclidean Jordan algebras, respectively.

The Cartesian P_*(κ)-SCLCP is indeed a generalization of the P_*(κ)-LCP, which was first introduced by Kojima et al. [9]. They established the existence of the central path and designed and analyzed IPMs for solving the P_*(κ)-LCP. The theoretical importance of this class of LCPs lies in the fact that it is the largest class for which polynomial convergence of IPMs can be proved without additional conditions (such as boundedness of the level sets).

Luo and Xiu [8] were the first to establish a theoretical framework of path-following interior-point algorithms for the Cartesian P_*(κ)-SCLCP and to prove the global convergence and iteration complexities of the proposed algorithms. Wang and Bai [10] analyzed a class of IPMs for the Cartesian P_*(κ)-SCLCP based on a parametric kernel function different from the logarithmic kernel function. Lesaja et al. [11] gave a unified analysis of kernel-based IPMs for the Cartesian P_*(κ)-SCLCP and derived the currently best known iteration bounds for large- and small-update methods for some special eligible kernel functions. Wang and Lesaja [12] generalized Roos's full-Newton step feasible IPM for LO [13], and Gu et al.'s extension of it to SO [14], to the Cartesian P_*(κ)-SCLCP. Liu et al. [15] proposed smoothing Newton methods for the Cartesian P_0- and P-SCLCPs. Huang and Lu [16] presented a globally convergent smoothing method with a linear rate of convergence for the Cartesian P_*(κ)-SCLCP.

Bai et al. [17] introduced a finite kernel function as follows:
ψ(t) = (t^2 − 1)/2 + (e^{σ(1−t)} − 1)/σ, σ ≥ 1,
(3)

which is not a kernel function in the usual sense (see, e.g., [18, 19]). It has a finite value at the boundary of the feasible region, i.e.,
lim_{t→0+} ψ(t) = ψ(0) = −1/2 + (e^σ − 1)/σ < ∞.
(4)

However, the iteration bound of a large-update method based on this kernel function is shown to be O(√n log n log(n/ε)). Recently, El Ghami et al. [20] studied the following generalization of the finite kernel function ψ(t):
ψ_{p,σ}(t) = (t^{p+1} − 1)/(p+1) + (e^{σ(1−t)} − 1)/σ, p ∈ [0, 1], σ ≥ 1.
(5)

This parametric kernel function also has a finite value at the boundary of the feasible region, and its growth term is between linear and quadratic. They proposed a class of primal-dual interior-point algorithms for LO, and its extension to SDO [21], based on the parametric kernel function ψ_{p,σ}(t). Meanwhile, the results for LO in [17, 20] were extended to the P_*(κ)-LCP by Wang and Bai in [22], again matching the best known iteration bounds for LO up to the additional factor 1 + 2κ. An interesting question is whether we can directly extend the interior-point algorithms for LO in [17] to the Cartesian P_*(κ)-SCLCP. As we will see later, the LO analysis cannot be trivially carried over to the Cartesian P_*(κ)-SCLCP context. The analysis of the algorithm proposed in this paper is more complicated than in the LO case, mainly because the search directions are no longer orthogonal.
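The kernels (3) and (5) are easy to probe numerically. The sketch below (with illustrative parameter values) checks that ψ(1) = 0, that ψ stays finite as t → 0+ as stated in (4), and that ψ_{p,σ} reduces to ψ at p = 1.

```python
import numpy as np

def psi(t, sigma):
    # finite kernel function (3)
    return (t**2 - 1.0) / 2.0 + (np.exp(sigma * (1.0 - t)) - 1.0) / sigma

def psi_p(t, p, sigma):
    # parametric generalization (5): growth between linear (p=0) and quadratic (p=1)
    return (t**(p + 1.0) - 1.0) / (p + 1.0) + (np.exp(sigma * (1.0 - t)) - 1.0) / sigma

sigma = 2.0
assert np.isclose(psi(1.0, sigma), 0.0)        # the kernel vanishes at t = 1
# finite boundary value, equation (4): psi(0) = -1/2 + (e^sigma - 1)/sigma
assert np.isclose(psi(0.0, sigma), -0.5 + (np.exp(sigma) - 1.0) / sigma)
# p = 1 recovers the original finite kernel (3)
ts = np.linspace(0.1, 3.0, 30)
assert np.allclose(psi_p(ts, 1.0, sigma), psi(ts, sigma))
```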

In this paper, we consider a generalization of the kernel-based IPMs discussed in the paper [17] to the Cartesian P_*(κ)-SCLCP. The paper also extends the results of [22], where we considered the same type of IPMs for the P_*(κ)-LCP, however, only over the nonnegative orthant. Our goal is to provide a unified analysis of both large- and small-update IPMs for the Cartesian P_*(κ)-SCLCP based on the finite barrier. Although the proposed algorithm is an exact extension of the algorithms for LO and the P_*(κ)-LCP, the Cartesian P_*(κ)-property makes the analysis of the method far more complicated. Furthermore, we lose the orthogonality of the scaled search directions in the Cartesian P_*(κ)-SCLCP case, which also causes many difficulties in the analysis of the algorithm. Nevertheless, we manage to prove the same good characteristics as in the LO case. The obtained complexity results match the best iteration bounds known for large- and small-update methods, namely O((1 + 2κ)√r log r log(r/ε)) and O((1 + 2κ)√r log(r/ε)), respectively. The order of the iteration bounds almost coincides with the bounds derived for LO in [17], except that in the Cartesian P_*(κ)-SCLCP case they are multiplied by the factor (1 + 2κ).

The paper is organized as follows. In Section 2, we briefly describe some concepts, properties, and results from Euclidean Jordan algebras. In Section 3, we provide and develop some useful properties of the finite kernel function ψ ( t ) and the corresponding barrier function Ψ ( v ) . In Section 4, we mainly study primal-dual IPMs for the Cartesian P ( κ ) -SCLCP based on the finite kernel function. The analysis and complexity bounds of the algorithm are presented in Sections 5 and 6, respectively. Finally, some conclusions and remarks follow in Section 7.

Notations used throughout the paper are as follows. R^n, R^n_+, and R^n_{++} denote the set of all vectors (with n components), the set of nonnegative vectors, and the set of positive vectors, respectively. The largest and smallest eigenvalues of x are denoted by λ_max(x) and λ_min(x). The Löwner partial ordering '⪰_K' of V defined by the symmetric cone K is given by x ⪰_K s if x − s ∈ K. The interior of K is denoted by int K, and we write x ≻_K s if x − s ∈ int K. Finally, if g(x) ≥ 0 is a real-valued function of a real nonnegative variable, the notation g(x) = O(x) means that g(x) ≤ c̄x for some positive constant c̄, and g(x) = Θ(x) means that c_1 x ≤ g(x) ≤ c_2 x for two positive constants c_1 and c_2.

2 Preliminaries

In what follows, we assume that the reader is familiar with the basic concepts of Euclidean Jordan algebras and symmetric cones. Detailed information can be found in the monograph of Faraut and Korányi [23] and, as it relates to optimization, in [1, 14, 24-29].

The bilinear form on V is defined as
x ∘ s := (x^(1) ∘ s^(1), …, x^(N) ∘ s^(N))^T,
(6)

where x = (x^(1), …, x^(N))^T and s = (s^(1), …, s^(N))^T in V with x^(j), s^(j) ∈ V_j, j = 1, …, N. If e^(j) ∈ V_j is the identity element of the Euclidean Jordan algebra V_j, then the vector
e = (e^(1), …, e^(N))^T
(7)

is the identity element of V.

For each x^(j) ∈ V_j with j = 1, …, N, the Lyapunov transformation and the quadratic representation of V_j are given by
L(x^(j)) y^(j) = x^(j) ∘ y^(j) and P(x^(j)) := 2L(x^(j))^2 − L((x^(j))^2),
(8)

where L(x^(j))^2 = L(x^(j)) L(x^(j)). They can be adapted to the Cartesian product structure of V as follows:
L(x) = diag(L(x^(1)), …, L(x^(N))) and P(x) = diag(P(x^(1)), …, P(x^(N))).
(9)

The spectral decomposition of x^(j) ∈ V_j with respect to the Jordan frame {c_1^(j), …, c_{r_j}^(j)} is given by
x^(j) = Σ_{i=1}^{r_j} λ_i(x^(j)) c_i^(j), j = 1, …, N,
(10)

where λ_1(x^(j)), …, λ_{r_j}(x^(j)) are the corresponding eigenvalues. The spectral decomposition of x ∈ V can be defined straightforwardly from the spectral decompositions of the components x^(j) ∈ V_j as follows:
x = (Σ_{i=1}^{r_1} λ_i(x^(1)) c_i^(1), …, Σ_{i=1}^{r_N} λ_i(x^(N)) c_i^(N))^T.
(11)

This enables us to extend the definition of any real-valued, continuous univariate function to elements of a Euclidean Jordan algebra via the eigenvalues. In particular, this holds for the finite kernel function.

Let x ∈ V have the spectral decomposition defined in (11). The vector-valued function ψ(x) is defined by
ψ(x) = (ψ(x^(1)), …, ψ(x^(N)))^T,
(12)

where
ψ(x^(j)) = ψ(λ_1(x^(j))) c_1^(j) + ⋯ + ψ(λ_{r_j}(x^(j))) c_{r_j}^(j), j = 1, …, N.
(13)

Furthermore, if ψ(t) is differentiable, the derivative ψ′(t) exists, and we also have a vector-valued function ψ′(x), namely
ψ′(x) = (ψ′(x^(1)), …, ψ′(x^(N)))^T,
(14)

where
ψ′(x^(j)) = ψ′(λ_1(x^(j))) c_1^(j) + ⋯ + ψ′(λ_{r_j}(x^(j))) c_{r_j}^(j), j = 1, …, N.
(15)

It should be noted that ψ′(x) does not denote the derivative of the vector-valued function ψ(x) defined by (12); it is simply the vector-valued function induced by the derivative ψ′(t) of the function ψ(t).

The Peirce decomposition of x^(j) ∈ V_j with respect to the Jordan frame {c_1^(j), …, c_{r_j}^(j)} is given by
x^(j) = Σ_{i=1}^{r_j} x_i^(j) c_i^(j) + Σ_{i<m_j} x_{im_j}^(j), j = 1, …, N,
(16)

with x_i^(j) ∈ R, i = 1, …, r_j, and x_{im_j}^(j) ∈ V_{im_j}^(j), 1 ≤ i < m_j ≤ r_j. The V_{im_j}^(j) for 1 ≤ i < m_j ≤ r_j are the Peirce subspaces of V_j induced by the Jordan frame c_1^(j), …, c_{r_j}^(j). The Peirce decomposition of x ∈ V can be defined straightforwardly from the Peirce decompositions of the components x^(j) ∈ V_j as follows:
x = (Σ_{i=1}^{r_1} x_i^(1) c_i^(1) + Σ_{i<m_1} x_{im_1}^(1), …, Σ_{i=1}^{r_N} x_i^(N) c_i^(N) + Σ_{i<m_N} x_{im_N}^(N))^T.
(17)

The canonical inner product is defined as
⟨x, s⟩ = Σ_{j=1}^{N} ⟨x^(j), s^(j)⟩ = Σ_{j=1}^{N} tr(x^(j) ∘ s^(j)).
(18)

We recall the following definitions:
tr(x^(j)) = Σ_{i=1}^{r_j} λ_i(x^(j)), det(x^(j)) = Π_{i=1}^{r_j} λ_i(x^(j)) and ‖x^(j)‖_F = √(Σ_{i=1}^{r_j} λ_i^2(x^(j))), j = 1, …, N.
(19)

Then, in V we have
tr(x) = Σ_{j=1}^{N} tr(x^(j)), det(x) = Π_{j=1}^{N} det(x^(j)) and ‖x‖_F = √(Σ_{j=1}^{N} ‖x^(j)‖_F^2).
(20)

Furthermore, we define
λ_max(x) = max{λ_i(x^(j)) : j = 1, …, N, 1 ≤ i ≤ r_j}
(21)

and
λ_min(x) = min{λ_i(x^(j)) : j = 1, …, N, 1 ≤ i ≤ r_j}.
(22)

It follows from (21), (22), and (20) that
|λ_max(x)| ≤ ‖x‖_F and |λ_min(x)| ≤ ‖x‖_F.
(23)
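In the simple Euclidean Jordan algebra of real symmetric matrices (Jordan product x ∘ s = (xs + sx)/2), the algebraic eigenvalues coincide with the usual matrix eigenvalues, so the quantities in (19) and the bound (23) can be checked numerically; a minimal sketch with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = (A + A.T) / 2.0                     # a symmetric element of the algebra
lam = np.linalg.eigvalsh(x)             # its eigenvalues

assert np.isclose(lam.sum(), np.trace(x))            # tr(x) = sum of eigenvalues
assert np.isclose(np.prod(lam), np.linalg.det(x))    # det(x) = product of eigenvalues
fro = np.sqrt((lam**2).sum())
assert np.isclose(fro, np.linalg.norm(x, 'fro'))     # ||x||_F via eigenvalues, (19)
# (23): |lambda_max(x)| <= ||x||_F and |lambda_min(x)| <= ||x||_F
assert abs(lam.max()) <= fro + 1e-12
assert abs(lam.min()) <= fro + 1e-12
```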

Furthermore, we have the following important result.

Lemma 2.1 (Lemma 14 in [28])

Let x, s ∈ V. Then
λ_min(x) − ‖s‖_F ≤ λ_min(x + s) ≤ λ_max(x + s) ≤ λ_max(x) + ‖s‖_F.
Before ending this section, we need to consider the separable spectral functions induced by univariate functions. Let f : D → R be a univariate function on the open set D ⊆ R that is differentiable, or even continuously differentiable if necessary. Let x^(j) = Σ_{i=1}^{r_j} λ_i(x^(j)) c_i^(j) be the spectral decomposition of x^(j) ∈ V_j with respect to the Jordan frame c_1^(j), …, c_{r_j}^(j) for each j = 1, …, N. Then we define the real-valued separable spectral function F(x^(j)) : V_j → R and the vector-valued separable spectral function G : V_j → V_j by
F(x^(j)) := Σ_{i=1}^{r_j} f(λ_i(x^(j))) and G(x^(j)) := Σ_{i=1}^{r_j} f(λ_i(x^(j))) c_i^(j), j = 1, …, N,
(24)

respectively. The first derivative D_x F(x^(j)) of the function F(x^(j)) and the first derivative D_x G(x^(j)) of the function G(x^(j)) are given by
D_x F(x^(j)) = Σ_{i=1}^{r_j} f′(λ_i^(j)) c_i^(j)
(25)

and
D_x G(x^(j)) = Σ_{i=1}^{r_j} f′(λ_i^(j)) V_{ii}^(j) + Σ_{i<m_j, λ_i^(j)=λ_{m_j}^(j)} f′(λ_i^(j)) V_{im_j}^(j) + Σ_{i<m_j, λ_i^(j)≠λ_{m_j}^(j)} [f(λ_i^(j)) − f(λ_{m_j}^(j))]/[λ_i^(j) − λ_{m_j}^(j)] V_{im_j}^(j),
(26)

respectively, where λ_i^(j) = λ_i(x^(j)), λ_{m_j}^(j) = λ_{m_j}(x^(j)), and the V_{im_j}^(j), 1 ≤ i ≤ m_j ≤ r_j, are the orthogonal projection operators that appear in the Peirce decomposition of V_j with respect to the Jordan frame c_1^(j), …, c_{r_j}^(j).

The above results, as well as a more general treatment of spectral functions, their derivatives and various properties can be found in [24, 27].

Now, the separable spectral functions can be adapted to the Cartesian product structure of V as follows:
F(x) = Σ_{j=1}^{N} F(x^(j)) and G(x) = (G(x^(1)), …, G(x^(N)))^T.
(27)

It follows directly from (25) and (26) that
D_x F(x) = (D_x F(x^(1)), …, D_x F(x^(N)))^T and D_x G(x) = (D_x G(x^(1)), …, D_x G(x^(N)))^T.
(28)

3 Properties of the finite kernel (barrier) function

In this section, we provide and develop some useful properties of the finite kernel function and the corresponding barrier function that are needed in the analysis of the algorithm. For ease of reference, we give the first three derivatives of ψ(t) with respect to t as follows:
ψ′(t) = t − e^{σ(1−t)}, ψ″(t) = 1 + σe^{σ(1−t)} and ψ‴(t) = −σ^2 e^{σ(1−t)}.
(29)

We can conclude that
ψ(1) = ψ′(1) = 0, ψ″(t) > 0 for t > 0, ψ‴(t) < 0 for t > 0, and lim_{t→∞} ψ(t) = +∞.
(30)

It follows from (30) that ψ(t) is strictly convex and that ψ″(t) is monotonically decreasing on t ∈ (0, +∞).
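The closed forms in (29) and the sign conditions in (30) can be verified numerically; the sketch below (with an illustrative σ) also checks ψ′ against a central finite difference of ψ.

```python
import numpy as np

def psi(t, sigma):    # finite kernel (3)
    return (t**2 - 1.0) / 2.0 + (np.exp(sigma * (1.0 - t)) - 1.0) / sigma

def dpsi(t, sigma):   # psi'(t) = t - e^{sigma(1-t)}, first formula of (29)
    return t - np.exp(sigma * (1.0 - t))

def d2psi(t, sigma):  # psi''(t) = 1 + sigma e^{sigma(1-t)}
    return 1.0 + sigma * np.exp(sigma * (1.0 - t))

sigma = 3.0
# (30): psi(1) = psi'(1) = 0 and psi''(t) > 0 on (0, inf)
assert np.isclose(psi(1.0, sigma), 0.0)
assert np.isclose(dpsi(1.0, sigma), 0.0)
ts = np.linspace(0.05, 5.0, 100)
assert np.all(d2psi(ts, sigma) > 0)
# finite-difference check of psi' against psi
h = 1e-6
num = (psi(ts + h, sigma) - psi(ts - h, sigma)) / (2.0 * h)
assert np.allclose(num, dpsi(ts, sigma), atol=1e-4)
```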

The property described below in Lemma 3.1 is exponential convexity, which has been proven to be very useful in the analysis of kernel-based primal-dual IPMs (see, e.g., [18, 19]).

Lemma 3.1 (Lemma 2.4 in [17])

If t_1 ≥ 1/σ and t_2 ≥ 1/σ, then
ψ(√(t_1 t_2)) ≤ (1/2)(ψ(t_1) + ψ(t_2)).

Note that ψ(t) is exponentially convex whenever t ≥ 1/σ. The following lemma makes clear that when v belongs to the level set {v : Ψ(v) ≤ L} for some given L ≥ 8, the exponential convexity is guaranteed provided that σ is chosen large enough.

Lemma 3.2 (Lemma 2.5 in [17])

Let L ≥ 8 and Ψ(v) ≤ L. If σ ≥ 1 + 2 log(1 + L), then λ_min(v) ≥ 3/(2σ).

Corresponding to the finite kernel function ψ(t) defined by (3), we define the barrier function on int K as follows:
Ψ(v) := Ψ(x, s; μ) := tr(ψ(v)).
(31)

It follows immediately from (12) and (20) that
Ψ(v) = tr(ψ(v)) = Σ_{j=1}^{N} tr(ψ(v^(j))) = Σ_{j=1}^{N} Σ_{i=1}^{r_j} ψ(λ_i(v^(j))).
(32)

According to the properties of the finite kernel function ψ(t), we can conclude that Ψ(v) is nonnegative and strictly convex with respect to v ≻_K 0, and vanishes at its global minimal point v = e, i.e.,
Ψ(v) = 0 ⇔ ψ(v) = 0 ⇔ ψ′(v) = 0 ⇔ v = e.

Furthermore, we have, by (28),
D_v Ψ(v) = (Σ_{i=1}^{r_1} ψ′(λ_i(v^(1))) c_i^(1), …, Σ_{i=1}^{r_N} ψ′(λ_i(v^(N))) c_i^(N))^T = ψ′(v).
(33)

This means that the derivative of the barrier function Ψ(v) in essence coincides with the vector-valued function ψ′(v) defined by (14) and (15).

As a consequence of Lemma 3.1, we have the following theorem, which is a slight modification of Theorem 4.3.2 in [29]; we therefore omit its proof.

Theorem 3.3 Let x, s ∈ int K. If λ_min(x) ≥ 1/σ and λ_min(s) ≥ 1/σ, then
Ψ((P(x)^{1/2} s)^{1/2}) ≤ (1/2)(Ψ(x) + Ψ(s)).

Lemma 3.4 If t ≥ 1, then
ψ(t) ≤ ((1 + σ)/2)(t − 1)^2.

Proof From Taylor's theorem and the fact that ψ″(1) = 1 + σ, the inequality is straightforward. □

Lemma 3.5 If t ≥ 1, then
tψ′(t) ≥ ψ(t).

Proof Defining f(t) := tψ′(t) − ψ(t), we have f(1) = 0 and
f′(t) = tψ″(t) ≥ 0.

This implies the desired result. □

The following lemma, which can be directly obtained from Lemma 2.5 in [22], provides lower and upper bounds for the inverse function of the finite kernel function ψ(t) for t ≥ 1.

Lemma 3.6 Let ϱ : [0, ∞) → [1, ∞) be the inverse function of the finite kernel function ψ(t) for t ≥ 1. If σ ≥ 1, then
√(1 + 2s) ≤ ϱ(s) ≤ (2s + (2 + σ)/σ)^{1/2}.
(34)

If σ ≥ 2, then
ϱ(s) ≥ 1 + √s / (2s + (2 + σ)/σ)^{1/4}.
(35)
For the analysis of the algorithm, we define the norm-based proximity measure δ(v) as follows:
δ(v) := (1/2)‖ψ′(v)‖_F.
(36)

It follows from (14) and (20) that
δ(v) = (1/2)√(Σ_{j=1}^{N} Σ_{i=1}^{r_j} ψ′(λ_i(v^(j)))^2).
(37)

We can conclude that δ(v) ≥ 0, and δ(v) = 0 if and only if Ψ(v) = 0.
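For the orthant case, the eigenvalues of v are just its components, so (36)-(37) collapse to an ordinary Euclidean norm; a minimal sketch with an illustrative σ:

```python
import numpy as np

sigma = 2.0
def dpsi(t):
    # psi'(t) for the finite kernel (3)
    return t - np.exp(sigma * (1.0 - t))

def delta(v):
    # norm-based proximity measure (36)-(37); for K = R^n_+ the
    # eigenvalues of v are its components
    return 0.5 * np.linalg.norm(dpsi(v))

e = np.ones(5)
assert np.isclose(delta(e), 0.0)          # delta(e) = 0 at the mu-center
v = np.array([0.5, 1.0, 1.5, 2.0, 0.8])
assert delta(v) > 0.0                     # positive away from the center
```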

Clearly, δ(v) and Ψ(v) depend only on the eigenvalues λ_i(v^(j)) of the components v^(j) for each j = 1, …, N. The following theorem gives a lower bound on δ(v) in terms of Ψ(v), which is precisely the same as its LO counterpart (cf. Theorem 4.8 in [18]).

Theorem 3.7 If v ∈ int K, then
δ(v) ≥ (1/2)ψ′(ϱ(Ψ(v))).

Corollary 3.8 If v ∈ int K and Ψ(v) ≥ 1, then
δ(v) ≥ (1/6)√Ψ(v).

Proof From (34) and the facts that Ψ(v) ≥ 1 and σ ≥ 1, we have
ϱ(Ψ(v)) ≤ (2Ψ(v) + (2 + σ)/σ)^{1/2} ≤ (5Ψ(v))^{1/2} ≤ 3(Ψ(v))^{1/2}.

Thus we have, by Theorem 3.7 and Lemma 3.5,
δ(v) ≥ (1/2)ψ′(ϱ(Ψ(v))) ≥ ψ(ϱ(Ψ(v)))/(2ϱ(Ψ(v))) = Ψ(v)/(2ϱ(Ψ(v))) ≥ Ψ(v)/(6(Ψ(v))^{1/2}) = (1/6)√Ψ(v).

This completes the proof of the corollary. □

In what follows, we consider the derivatives of the function Ψ(x(t)) with respect to t, where x(t) = x_0 + tu ∈ int K with t ∈ R and u ∈ V. For more details, we refer to [29].

It follows from (11) and (17) that the spectral decomposition of x(t) with respect to the Jordan frame {c_1^(1), …, c_{r_1}^(1), …, c_1^(N), …, c_{r_N}^(N)} can be written as
x(t) = (Σ_{i=1}^{r_1} λ_i(x(t)^(1)) c_i^(1), …, Σ_{i=1}^{r_N} λ_i(x(t)^(N)) c_i^(N))^T,
(38)

and the Peirce decomposition of u can be written as
u = (Σ_{i=1}^{r_1} u_i^(1) c_i^(1) + Σ_{i<m_1} u_{im_1}^(1), …, Σ_{i=1}^{r_N} u_i^(N) c_i^(N) + Σ_{i<m_N} u_{im_N}^(N))^T.
(39)
From (28), after some elementary reductions, we can derive the first two derivatives of the general function Ψ(x(t)) with respect to t as follows:
D_t Ψ(x(t)) = Σ_{j=1}^{N} tr(Σ_{i=1}^{r_j} ψ′(λ_i(x(t)^(j))) c_i^(j) ∘ u^(j))
(40)

and
D_t^2 Ψ(x(t)) = Σ_{j=1}^{N} (Σ_{i=1}^{r_j} ψ″(λ_i^(j))(u_i^(j))^2 + Σ_{i<m_j, λ_i^(j)=λ_{m_j}^(j)} ψ″(λ_i^(j)) tr((u_{im_j}^(j))^2) + Σ_{i<m_j, λ_i^(j)≠λ_{m_j}^(j)} [ψ′(λ_i^(j)) − ψ′(λ_{m_j}^(j))]/[λ_i^(j) − λ_{m_j}^(j)] tr((u_{im_j}^(j))^2)),
(41)

respectively, where λ_i^(j) = λ_i(x(t)^(j)) and λ_{m_j}^(j) = λ_{m_j}(x(t)^(j)).

Note that ψ″(t) is monotonically decreasing in t ∈ (0, +∞). Under the assumption that i < m_j implies λ_i(x(t)^(j)) ≥ λ_{m_j}(x(t)^(j)), we can conclude that
D_t^2 Ψ(x(t)) ≤ Σ_{j=1}^{N} (Σ_{i=1}^{r_j} ψ″(λ_i^(j))(u_i^(j))^2 + Σ_{i<m_j} ψ″(λ_{m_j}^(j)) tr((u_{im_j}^(j))^2)),
(42)

which bounds the second-order derivative of Ψ ( x ( t ) ) with respect to t.

4 Interior-point algorithm for the Cartesian P_*(κ)-SCLCP

In this section, we first introduce the central path for the Cartesian P_*(κ)-SCLCP. Next, we derive the new search directions induced by the finite kernel function ψ(t). Finally, we present the generic polynomial interior-point algorithm for the Cartesian P_*(κ)-SCLCP.

4.1 The central path for the Cartesian P_*(κ)-SCLCP

Throughout the paper, we assume that the Cartesian P_*(κ)-SCLCP satisfies the interior-point condition (IPC), i.e., there exists (x^0 ≻_K 0, s^0 ≻_K 0) such that s^0 = A(x^0) + q. For this and other properties of the Cartesian P_*(κ)-SCLCP, we refer to [8]. When the IPC holds, by relaxing the complementarity condition x ∘ s = 0 to x ∘ s = μe, we obtain the following system:
s = A(x) + q, x ∘ s = μe, x ≻_K 0, s ≻_K 0,
(43)

where μ > 0 is a parameter. The parameterized system (43) has a unique solution for each μ > 0. This solution is denoted by (x(μ), s(μ)), and we call it the μ-center of the Cartesian P_*(κ)-SCLCP. The set of μ-centers (with μ running through all positive real numbers) forms a homotopy path, which is called the central path of the Cartesian P_*(κ)-SCLCP. If μ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition x ∘ s = 0, the limit yields a solution of the Cartesian P_*(κ)-SCLCP (see, e.g., [8]).

4.2 The new search directions for the Cartesian P_*(κ)-SCLCP

To obtain the search directions for the Cartesian P_*(κ)-SCLCP, the usual approach is to use Newton's method and to linearize the system (43). In what follows, we briefly outline the details.

For any strictly feasible x ≻_K 0 and s ≻_K 0, we want to find displacements Δx and Δs such that
A(x + Δx) − (s + Δs) = −q, (x + Δx) ∘ (s + Δs) = μe.
(44)

Neglecting the term Δx ∘ Δs on the left-hand side of the second equation, we obtain the following Newton system for the search directions Δx and Δs:
A(Δx) − Δs = 0, s ∘ Δx + x ∘ Δs = μe − x ∘ s.
(45)

Since x and s do not operator-commute in general, i.e., L(x)L(s) ≠ L(s)L(x), this system does not always have a unique solution. It is well known that this difficulty can be resolved by applying a scaling scheme, which goes as follows.

Lemma 4.1 (Lemma 28 in [28])

Let u ∈ int K. Then
x ∘ s = μe ⇔ P(u)x ∘ P(u^{-1})s = μe.

Now we replace the second equation of the system (44) by
P(u)(x + Δx) ∘ P(u^{-1})(s + Δs) = μe.
(46)

Applying Newton's method again, and neglecting the term P(u)Δx ∘ P(u^{-1})Δs, we get
A(Δx) − Δs = 0, P(u^{-1})s ∘ P(u)Δx + P(u)x ∘ P(u^{-1})Δs = μe − P(u)x ∘ P(u^{-1})s.
(47)

By choosing u appropriately, this system can be used to define a commutative class of search directions (see, e.g., [28]). In the literature the following three choices are well known: u = s^{1/2}, u = x^{-1/2}, and u = w^{-1/2}, where w is the NT-scaling point of x and s. The first two choices lead to the so-called xs-direction and sx-direction, respectively. In this paper, we consider the third choice, called the NT-scaling scheme; the resulting direction is called the NT search direction. This scaling scheme was first proposed by Nesterov and Todd for self-scaled cones [30, 31] and then adapted by Faybusovich [1, 26] to symmetric cones.

Lemma 4.2 (Lemma 3.2 in [26])

Let x, s ∈ int K. Then there exists a unique scaling point w ∈ int K such that
x = P(w)s.

Moreover,
w = P(x^{1/2})(P(x^{1/2})s)^{-1/2} [= P(s^{-1/2})(P(s^{1/2})x)^{1/2}].

As a consequence of the above lemma, there exists ṽ ∈ int K such that
ṽ = P(w)^{-1/2}x = P(w)^{1/2}s.
(48)

Note that P(w)^{1/2} and its inverse P(w)^{-1/2} are automorphisms of K (see, e.g., [14, 29]). This leads to the definition of the following variance vector:
v := (1/√μ) P(w)^{-1/2}x [= (1/√μ) P(w)^{1/2}s].
(49)

Furthermore, we define
Ā := P(w)^{1/2} A P(w)^{1/2}, d_x := P(w)^{-1/2}Δx/√μ and d_s := P(w)^{1/2}Δs/√μ.
(50)

The transformation Ā also has the Cartesian P_*(κ)-property (cf. Proposition 3.4 in [8]).
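In the orthant special case, P(w) = diag(w^2), the NT-scaling point of Lemma 4.2 reduces to the componentwise w = √(x/s), and the two expressions for the variance vector in (49) coincide; a minimal numerical sketch with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, 6)
s = rng.uniform(0.5, 2.0, 6)
mu = 0.7

w = np.sqrt(x / s)                        # NT scaling point for K = R^n_+
assert np.allclose(x, w**2 * s)           # x = P(w) s, Lemma 4.2

v = np.sqrt(x * s / mu)                   # variance vector (49), orthant case
# P(w)^{-1/2} x / sqrt(mu) and P(w)^{1/2} s / sqrt(mu) agree
assert np.allclose(x / (w * np.sqrt(mu)), v)
assert np.allclose(w * s / np.sqrt(mu), v)
```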

Using (49) and (50), after some elementary reductions, we obtain the scaled Newton system as follows:
Ā(d_x) − d_s = 0, d_x + d_s = v^{-1} − v.
(51)

Since the linear transformation Ā has the Cartesian P_*(κ)-property, the system (51) has a unique solution [8].

So far we have described the scheme that defines the classical NT-direction for the Cartesian P_*(κ)-SCLCP. The approach in this paper differs in only one detail. Given the finite kernel function ψ(t) defined by (3) and the associated vector-valued function ψ′(v) defined by (14) and (15), we replace the right-hand side of the second equation in (51) by −ψ′(v), i.e., minus the derivative of the barrier function Ψ(v). Thus we consider the following system:
Ā(d_x) − d_s = 0, d_x + d_s = −ψ′(v).
(52)

Since the system (52) has the same coefficient matrix as the system (51), it also has a unique solution.
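In the orthant case, eliminating d_s from (52) via d_s = Ā d_x gives the single linear system (I + Ā) d_x = −ψ′(v); a hedged sketch with illustrative data (a positive definite Ā, hence monotone, i.e., κ = 0):

```python
import numpy as np

sigma = 2.0
dpsi = lambda t: t - np.exp(sigma * (1.0 - t))   # psi'(t), from (29)

n = 4
rng = np.random.default_rng(2)
B = rng.standard_normal((n, n))
A_bar = B @ B.T + np.eye(n)        # positive definite: Cartesian P_*(0)-property
v = rng.uniform(0.8, 1.5, n)

dx = np.linalg.solve(np.eye(n) + A_bar, -dpsi(v))
ds = A_bar @ dx
# verify both equations of (52)
assert np.allclose(A_bar @ dx - ds, 0.0)
assert np.allclose(dx + ds, -dpsi(v))
```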

The new search directions d_x and d_s are computed by solving (52); then Δx and Δs are obtained from (50). If (x, s) ≠ (x(μ), s(μ)), then (Δx, Δs) is nonzero. By taking a default step size α along the search directions, we get the new iterate (x_+, s_+) according to
x_+ := x + αΔx and s_+ := s + αΔs.
(53)

Furthermore, we can easily verify that
x ∘ s = μe ⇔ v = e ⇔ ψ(v) = 0 ⇔ ψ′(v) = 0 ⇔ Ψ(v) = 0.
(54)

Hence, the value of Ψ ( v ) can be considered as a measure for the distance between the given iterate ( x , s ) and the μ-center ( x ( μ ) , s ( μ ) ) .

4.3 The generic interior-point algorithm for the Cartesian P_*(κ)-SCLCP

Define the τ-neighborhood of the central path as follows:
N(τ) := {(x, s) ∈ int K × int K : s = A(x) + q, Ψ(v) ≤ τ}.

It is clear from the above description that the closeness of (x, s) to (x(μ), s(μ)) is measured by the value of Ψ(v), with τ > 0 as a threshold value. If Ψ(v) ≤ τ, then we start a new outer iteration by performing a μ-update, i.e., μ_+ := (1 − θ)μ; otherwise, we enter an inner iteration: we compute the search directions from (52) and (50) at the current iterates with respect to the current value of μ, and apply (53) to get new iterates. If necessary, we repeat the procedure until we find iterates that are in the τ-neighborhood of the central path. Then μ is again reduced by the factor 1 − θ with 0 < θ < 1, and we apply inner iteration(s) targeting the new μ-center, and so on. This process is repeated until μ is small enough, say until rμ < ε. At this stage, we have found an ε-approximate solution of the Cartesian P_*(κ)-SCLCP.

The generic interior-point algorithm for the Cartesian P_*(κ)-SCLCP is now presented as follows.
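The outer/inner loop described above can be sketched for the special case K = R^n_+ with a monotone (κ = 0) LCP. All data and parameter values below are illustrative, and the simple backtracking rule stands in for the default step size derived in Section 5.

```python
import numpy as np

# Generic algorithm sketch for K = R^n_+: s = M x + q with x, s > 0.
sigma, theta, tau, eps = 5.0, 0.5, 3.0, 1e-6

psi  = lambda t: (t**2 - 1.0) / 2.0 + (np.exp(sigma * (1.0 - t)) - 1.0) / sigma
dpsi = lambda t: t - np.exp(sigma * (1.0 - t))
barrier = lambda v: np.sum(psi(v))           # Psi(v), eq. (32), orthant case

M = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])              # positive definite -> monotone
x = np.ones(3)
s = np.array([3.5, 5.5, 4.0])                # strictly feasible starting point
q = s - M @ x
mu = x @ s / 3.0

while 3.0 * mu >= eps:                       # stop once r*mu < eps (r = n = 3)
    mu *= 1.0 - theta                        # outer iteration: mu-update
    for _ in range(200):                     # inner iterations (safety cap)
        v = np.sqrt(x * s / mu)              # variance vector (49)
        if barrier(v) <= tau:                # back in the tau-neighborhood
            break
        w = np.sqrt(x / s)                   # NT scaling point, Lemma 4.2
        A_bar = (w[:, None] * M) * w[None, :]
        dx = np.linalg.solve(np.eye(3) + A_bar, -dpsi(v))   # system (52)
        ds = A_bar @ dx
        Dx = np.sqrt(mu) * w * dx            # unscale via (50)
        Ds = np.sqrt(mu) * ds / w
        a = 1.0                              # backtrack: stay interior, decrease Psi
        while (np.any(x + a * Dx <= 0.0) or np.any(s + a * Ds <= 0.0)
               or barrier(np.sqrt((x + a * Dx) * (s + a * Ds) / mu)) >= barrier(v)):
            a *= 0.5
        x, s = x + a * Dx, s + a * Ds

# on exit, r*mu < eps and Psi(v) <= tau, so the gap <x, s> is of order eps
```

Note that feasibility s = Mx + q is preserved automatically, since Δs = MΔx follows from the first equation of (45).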

5 Analysis of the algorithm

In this section, we first discuss the growth behavior of the barrier function during an outer iteration. Next, we choose the default step size and obtain an upper bound for the decrease of the barrier function during an inner iteration. Finally, we show that the default step size yields sufficient decrease of the barrier function value during each inner iteration.

5.1 Growth behavior of the barrier function during an outer iteration

It should be mentioned that during the course of the algorithm the largest values of Ψ ( v ) occur just after the update of μ. So, next we derive an estimate for the effect of a μ-update on the value of Ψ ( v ) .

It follows from (32) that
Ψ(βv) = Σ_{j=1}^{N} Σ_{i=1}^{r_j} ψ(βλ_i(v^(j))),

which means that Ψ(βv) depends only on the eigenvalues λ_i(v^(j)) of the components v^(j) for each j = 1, …, N. The growth behavior of the proximity Ψ(v) is precisely the same as in the LO case (cf. Theorem 3.2 in [18]).

Theorem 5.1 If v ≻_K 0 and β ≥ 1, then
Ψ(βv) ≤ rψ(βϱ(Ψ(v)/r)).

Corollary 5.2 Let 0 < θ < 1 and v_+ = v/√(1 − θ). If Ψ(v) ≤ τ, then
Ψ(v_+) ≤ rψ(ϱ(τ/r)/√(1 − θ)).

Proof With β = 1/√(1 − θ) > 1 and Ψ(v) ≤ τ, the corollary follows immediately from Theorem 5.1. □

5.2 Choice of the default step size

From (53) and (50), after some elementary reductions, we have
x_+ = √μ P(w)^{1/2}(v + αd_x) and s_+ = √μ P(w)^{-1/2}(v + αd_s).
(55)

Thus,
v_+ := (1/√μ) P(w_+)^{-1/2}x_+ = (1/√μ) P(w_+)^{1/2}s_+,

or equivalently,
v_+ = P(w_+)^{-1/2}P(w)^{1/2}(v + αd_x) = P(w_+)^{1/2}P(w)^{-1/2}(v + αd_s),

where, according to Lemma 4.2,
w_+ := P(x_+^{1/2})((P(x_+^{1/2})s_+)^{-1/2}).
To calculate the decrease of the barrier function Ψ(v) during an inner iteration, it is standard to consider the decrease as a function of α defined by
f(α) := Ψ(v_+) − Ψ(v).

Our aim is to find an upper bound for f(α) by using the exponential convexity of ψ(t), according to Lemma 3.1. In order to do this, we assume for the moment that
λ_min(v^(j) + αd_x^(j)) ≥ 1/σ and λ_min(v^(j) + αd_s^(j)) ≥ 1/σ, j = 1, …, N.
(56)

However, working with f(α) may not be easy because in general f(α) is not convex. Thus, we look for a convex function f_1(α) that is an upper bound of f(α) and whose derivatives are easier to calculate than those of f(α). The key element in this process is replacing v_+ with a similar element that allows the use of the exponential convexity of the barrier function. By Proposition 5.9.3 in [29], v_+ is similar to
(P(v + αd_x)^{1/2}(v + αd_s))^{1/2}

and therefore
Ψ(v_+) = Ψ((P(v + αd_x)^{1/2}(v + αd_s))^{1/2}).

Theorem 3.3 implies that
Ψ(v_+) ≤ (1/2)(Ψ(v + αd_x) + Ψ(v + αd_s)).

Hence, we have
f(α) ≤ f_1(α) := (1/2)(Ψ(v + αd_x) + Ψ(v + αd_s)) − Ψ(v),

which means that f_1(α) gives an upper bound for the decrease of the barrier function Ψ(v). Furthermore, we can conclude that f(0) = f_1(0) = 0.

From (40), we have
f_1′(α) = (1/2) Σ_{j=1}^{N} (tr(ψ′(v^(j) + αd_x^(j)) ∘ d_x^(j)) + tr(ψ′(v^(j) + αd_s^(j)) ∘ d_s^(j))) = (1/2)(tr(ψ′(v + αd_x) ∘ d_x) + tr(ψ′(v + αd_s) ∘ d_s)).

This gives, by (52) and (36),
f_1′(0) = (1/2) tr(ψ′(v) ∘ (d_x + d_s)) = −(1/2) tr(ψ′(v) ∘ ψ′(v)) = −(1/2)‖ψ′(v)‖_F^2 = −2δ(v)^2 < 0.

Hence, we can conclude that f_1(α) is monotonically decreasing in a neighborhood of α = 0.

Furthermore, we have, by (41) and (42),
f_1″(α) ≤ (1/2) Σ_{j=1}^{N} (Σ_{i=1}^{r_j} ψ″(λ_i(η^(j)))(d_{x,i}^(j))^2 + Σ_{i<m_j} ψ″(λ_{m_j}(η^(j))) tr((d_{x,im_j}^(j))^2)) + (1/2) Σ_{j=1}^{N} (Σ_{i=1}^{r_j} ψ″(λ_i(γ^(j)))(d_{s,i}^(j))^2 + Σ_{i<m_j} ψ″(λ_{m_j}(γ^(j))) tr((d_{s,im_j}^(j))^2)),
(57)

where η := v + αd_x and γ := v + αd_s.

Contrary to the LO case, the vectors $d_x$ and $d_s$ are no longer necessarily orthogonal. However, the Cartesian $P_*(\kappa)$-property of the SCLCP still allows us to derive a good lower bound on the inner product $\langle d_x, d_s\rangle$.

In order to facilitate the discussion, we denote
$$\delta := \delta(v), \qquad \delta_+ := \sum_{\nu\in J_+}\bigl\langle d_x^{(\nu)}, d_s^{(\nu)}\bigr\rangle \quad\text{and}\quad \delta_- := \sum_{\nu\in J_-}\bigl\langle d_x^{(\nu)}, d_s^{(\nu)}\bigr\rangle.$$
(58)
Lemma 5.3 One has
$$\langle d_x, d_s\rangle \ge -4\kappa\delta^2.$$
Proof Since the linear transformation $\mathcal{A}$ has the Cartesian $P_*(\kappa)$-property, we have
$$(1 + 4\kappa)\sum_{j\in I_+(\Delta x)}\bigl\langle \Delta x^{(j)}, [\mathcal{A}(\Delta x)]^{(j)}\bigr\rangle + \sum_{j\in I_-(\Delta x)}\bigl\langle \Delta x^{(j)}, [\mathcal{A}(\Delta x)]^{(j)}\bigr\rangle \ge 0,$$
(59)
where $I_+(\Delta x) = \{j : \langle \Delta x^{(j)}, [\mathcal{A}(\Delta x)]^{(j)}\rangle > 0\}$ and $I_-(\Delta x) = \{j : \langle \Delta x^{(j)}, [\mathcal{A}(\Delta x)]^{(j)}\rangle < 0\}$ are two index sets. It follows from (50) and $\mathcal{A}(\Delta x) = \Delta s$ that $\langle \Delta x, \Delta s\rangle = \mu\langle d_x, d_s\rangle$. This enables us to rewrite (59) as
$$(1 + 4\kappa)\sum_{j\in I_+(\Delta x)}\bigl\langle d_x^{(j)}, d_s^{(j)}\bigr\rangle + \sum_{j\in I_-(\Delta x)}\bigl\langle d_x^{(j)}, d_s^{(j)}\bigr\rangle \ge 0.$$
(60)
Hence, it follows that
$$\langle d_x, d_s\rangle \ge -4\kappa\sum_{j\in I_+(\Delta x)}\bigl\langle d_x^{(j)}, d_s^{(j)}\bigr\rangle.$$
(61)
Using the arithmetic-geometric mean inequality $\langle a, b\rangle \le \frac14\langle a+b, a+b\rangle$, we have
$$\sum_{j\in I_+(\Delta x)}\bigl\langle d_x^{(j)}, d_s^{(j)}\bigr\rangle \le \frac14\sum_{j\in I_+(\Delta x)}\bigl\|d_x^{(j)} + d_s^{(j)}\bigr\|_F^2 \le \frac14\sum_{j=1}^{N}\bigl\|d_x^{(j)} + d_s^{(j)}\bigr\|_F^2 = \frac14\|d_x + d_s\|_F^2 = \delta^2.$$
Substituting this inequality into (61) yields
$$\langle d_x, d_s\rangle \ge -4\kappa\delta^2.$$

This completes the proof of the lemma. □
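The inequality $\langle a, b\rangle \le \frac14\langle a+b, a+b\rangle$ used in the proof is elementary (the gap is exactly $\frac14\|a-b\|^2$), and can be spot-checked numerically; the helper below is purely illustrative.

```python
import random

def quarter_norm_gap(dim=20, trials=2000, seed=1):
    """Minimum over random pairs of (1/4)||a+b||^2 - <a,b>;
    mathematically this equals (1/4)||a-b||^2, hence is >= 0."""
    rng = random.Random(seed)
    worst = float("inf")
    for _ in range(trials):
        a = [rng.uniform(-1, 1) for _ in range(dim)]
        b = [rng.uniform(-1, 1) for _ in range(dim)]
        dot = sum(x * y for x, y in zip(a, b))
        s = sum((x + y) ** 2 for x, y in zip(a, b)) / 4.0
        worst = min(worst, s - dot)
    return worst

print(quarter_norm_gap() >= 0)  # True
```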

The key steps in the analysis of the algorithm rely on upper bounds on $\|d_x\|_F$ and $\|d_s\|_F$ in terms of the proximity measure $\delta$. The following lemma provides such bounds.

Lemma 5.4 One has
$$\|d_x\|_F \le 2\sqrt{1 + 2\kappa}\,\delta \quad\text{and}\quad \|d_s\|_F \le 2\sqrt{1 + 2\kappa}\,\delta.$$
Proof From Lemma 5.3, we have
$$\|d_x\|_F^2 + \|d_s\|_F^2 = \|d_x + d_s\|_F^2 - 2\langle d_x, d_s\rangle \le 4\delta^2 + 8\kappa\delta^2 = 4(1 + 2\kappa)\delta^2.$$
(62)

This implies the inequalities in the statement of the lemma. □
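The mechanism of Lemma 5.4 can be exercised numerically. The sketch below is illustrative: for random $d_x, d_s$ it sets $\delta = \frac12\|d_x+d_s\|$ and the smallest $\kappa \ge 0$ with $\langle d_x, d_s\rangle \ge -4\kappa\delta^2$, then checks the claimed norm bounds.

```python
import math
import random

def lemma54_check(dim=10, trials=500, seed=2):
    """For random d_x, d_s, define delta = ||d_x + d_s|| / 2 and the smallest
    kappa >= 0 with <d_x, d_s> >= -4*kappa*delta^2; verify that
    ||d_x|| and ||d_s|| are both <= 2*sqrt(1 + 2*kappa)*delta."""
    rng = random.Random(seed)
    for _ in range(trials):
        dx = [rng.uniform(-1, 1) for _ in range(dim)]
        ds = [rng.uniform(-1, 1) for _ in range(dim)]
        dot = sum(a * b for a, b in zip(dx, ds))
        delta = math.sqrt(sum((a + b) ** 2 for a, b in zip(dx, ds))) / 2.0
        if delta == 0:
            continue
        kappa = max(0.0, -dot / (4.0 * delta * delta))
        bound = 2.0 * math.sqrt(1.0 + 2.0 * kappa) * delta
        nx = math.sqrt(sum(a * a for a in dx))
        ns = math.sqrt(sum(b * b for b in ds))
        if nx > bound + 1e-9 or ns > bound + 1e-9:
            return False
    return True

print(lemma54_check())  # True
```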

Lemma 5.5 One has
$$f_1''(\alpha) \le 2(1 + 2\kappa)\delta^2\,\psi''\bigl(\lambda_{\min}(v) - 2\alpha\sqrt{1 + 2\kappa}\,\delta\bigr).$$
Proof From Lemma 2.1 and Lemma 5.4, we have
$$\lambda_i\bigl((v + \alpha d_x)^{(j)}\bigr) \ge \lambda_{\min}\bigl((v + \alpha d_x)^{(j)}\bigr) \ge \lambda_{\min}(v^{(j)}) - \alpha\|d_x^{(j)}\|_F \ge \lambda_{\min}(v^{(j)}) - 2\alpha\sqrt{1 + 2\kappa}\,\delta,$$
$$\lambda_i\bigl((v + \alpha d_s)^{(j)}\bigr) \ge \lambda_{\min}\bigl((v + \alpha d_s)^{(j)}\bigr) \ge \lambda_{\min}(v^{(j)}) - \alpha\|d_s^{(j)}\|_F \ge \lambda_{\min}(v^{(j)}) - 2\alpha\sqrt{1 + 2\kappa}\,\delta.$$
Let
$$d_x^{(j)} = \sum_{i=1}^{r_j} d_{x_i}^{(j)} c_i^{(j)} + \sum_{1\le i<m\le r_j} d_{x_{im}}^{(j)}, \quad j = 1, \ldots, N,$$
be the Peirce decomposition of $d_x^{(j)}$ with respect to the Jordan frame $\{c_1^{(j)}, \ldots, c_{r_j}^{(j)}\}$, and let
$$d_s^{(j)} = \sum_{i=1}^{r_j} d_{s_i}^{(j)} b_i^{(j)} + \sum_{1\le i<m\le r_j} d_{s_{im}}^{(j)}, \quad j = 1, \ldots, N,$$
be the Peirce decomposition of $d_s^{(j)}$ with respect to the Jordan frame $\{b_1^{(j)}, \ldots, b_{r_j}^{(j)}\}$. We have
$$\|d_x\|_F^2 = \sum_{j=1}^{N}\|d_x^{(j)}\|_F^2 = \sum_{j=1}^{N}\bigl\langle d_x^{(j)}, d_x^{(j)}\bigr\rangle = \sum_{j=1}^{N}\Bigl(\sum_{i=1}^{r_j}\bigl(d_{x_i}^{(j)}\bigr)^2 + \sum_{1\le i<m\le r_j}\operatorname{tr}\bigl((d_{x_{im}}^{(j)})^2\bigr)\Bigr)$$
and
$$\|d_s\|_F^2 = \sum_{j=1}^{N}\|d_s^{(j)}\|_F^2 = \sum_{j=1}^{N}\bigl\langle d_s^{(j)}, d_s^{(j)}\bigr\rangle = \sum_{j=1}^{N}\Bigl(\sum_{i=1}^{r_j}\bigl(d_{s_i}^{(j)}\bigr)^2 + \sum_{1\le i<m\le r_j}\operatorname{tr}\bigl((d_{s_{im}}^{(j)})^2\bigr)\Bigr).$$
Since $\psi''(t)$ is monotonically decreasing for $t \in (0, +\infty)$, we have, by (57),
$$f_1''(\alpha) \le \frac12\psi''\bigl(\lambda_{\min}(v) - 2\alpha\sqrt{1 + 2\kappa}\,\delta\bigr)\sum_{j=1}^{N}\Bigl(\sum_{i=1}^{r_j}\bigl(d_{x_i}^{(j)}\bigr)^2 + \sum_{1\le i<m\le r_j}\operatorname{tr}\bigl((d_{x_{im}}^{(j)})^2\bigr)\Bigr) + \frac12\psi''\bigl(\lambda_{\min}(v) - 2\alpha\sqrt{1 + 2\kappa}\,\delta\bigr)\sum_{j=1}^{N}\Bigl(\sum_{i=1}^{r_j}\bigl(d_{s_i}^{(j)}\bigr)^2 + \sum_{1\le i<m\le r_j}\operatorname{tr}\bigl((d_{s_{im}}^{(j)})^2\bigr)\Bigr) = \frac12\psi''\bigl(\lambda_{\min}(v) - 2\alpha\sqrt{1 + 2\kappa}\,\delta\bigr)\bigl(\|d_x\|_F^2 + \|d_s\|_F^2\bigr) \le 2(1 + 2\kappa)\delta^2\,\psi''\bigl(\lambda_{\min}(v) - 2\alpha\sqrt{1 + 2\kappa}\,\delta\bigr).$$

The last inequality follows from (62). This completes the proof of the lemma. □

From this point on, the analysis of the algorithm follows almost verbatim the similar analyses in [17, 22], with straightforward modifications that take the Cartesian $P_*(\kappa)$-property into account. Therefore, the intermediate results are omitted and only the main results are stated without proofs.

In particular, the step size $\alpha$ satisfies
$$\alpha \ge \frac{1}{(1 + 2\kappa)\,\psi''(\rho(2\delta))}.$$
(63)
It follows from (63) and the definition of $\rho$ that
$$\alpha \ge \frac{1}{(1 + 2\kappa)\bigl(1 + \sigma e^{\sigma(1-t)}\bigr)}, \quad\text{where } t \in \Bigl[\frac{1}{\sigma}, 1\Bigr] \text{ and } e^{\sigma(1-t)} - t = 4\delta.$$
(64)
Using the second equation of (64), we have
$$e^{\sigma(1-t)} = t + 4\delta \le 1 + 4\delta.$$
It follows from Corollary 3.8 and $\Psi(v) \ge 1$ that
$$\delta \ge \frac{\sqrt{\Psi(v)}}{6} \ge \frac16.$$
Hence, we have
$$\alpha \ge \frac{1}{\sigma(1 + 2\kappa)\bigl(1 + e^{\sigma(1-t)}\bigr)} \ge \frac{1}{2\sigma(1 + 2\kappa)(1 + 2\delta)} \ge \frac{1}{16\sigma\delta(1 + 2\kappa)}.$$
In the sequel, we use the notation
$$\tilde{\alpha} = \frac{1}{16\sigma\delta(1 + 2\kappa)},$$
(65)

and we take $\tilde{\alpha}$ as the default step size. Clearly, $\alpha \ge \tilde{\alpha}$.
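The chain of lower bounds leading to the default step size can be verified numerically. The sketch below is illustrative only: the helper `t_from_delta` simply solves $e^{\sigma(1-t)} - t = 4\delta$ by bisection, following (64), and `step_size_chain` evaluates the three successive bounds, which should be ordered whenever $\delta \ge 1/6$.

```python
import math

def t_from_delta(delta, sigma):
    """Solve e^{sigma(1-t)} - t = 4*delta for t by bisection on (0, 1];
    the left-hand side is strictly decreasing in t."""
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.exp(sigma * (1.0 - mid)) - mid > 4.0 * delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def step_size_chain(delta, sigma, kappa):
    """The three successive lower bounds on alpha from the displayed chain."""
    t = t_from_delta(delta, sigma)
    a = 1.0 / (sigma * (1.0 + 2.0 * kappa) * (1.0 + math.exp(sigma * (1.0 - t))))
    b = 1.0 / (2.0 * sigma * (1.0 + 2.0 * kappa) * (1.0 + 2.0 * delta))
    c = 1.0 / (16.0 * sigma * delta * (1.0 + 2.0 * kappa))
    return a, b, c

a, b, c = step_size_chain(0.5, 5.0, 0.25)
print(a >= b >= c)  # True: each bound dominates the next for delta >= 1/6
```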

Now, to validate the above analysis, we need to show that $\tilde{\alpha}$ satisfies (56). Indeed, from Lemmas 2.1, 3.2, 5.4 and (65), we have
$$\lambda_{\min}(v + \tilde{\alpha} d_x) \ge \lambda_{\min}(v) - \tilde{\alpha}\|d_x\|_F \ge \frac{3}{2\sigma} - \frac{1}{16\sigma\delta(1 + 2\kappa)}\cdot 2\sqrt{1 + 2\kappa}\,\delta \ge \frac{3}{2\sigma} - \frac{1}{8\sigma} = \frac{11}{8\sigma} \ge \frac{1}{\sigma}$$
and
$$\lambda_{\min}(v + \tilde{\alpha} d_s) \ge \lambda_{\min}(v) - \tilde{\alpha}\|d_s\|_F \ge \frac{3}{2\sigma} - \frac{1}{16\sigma\delta(1 + 2\kappa)}\cdot 2\sqrt{1 + 2\kappa}\,\delta \ge \frac{3}{2\sigma} - \frac{1}{8\sigma} = \frac{11}{8\sigma} \ge \frac{1}{\sigma}.$$
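As a quick numeric sanity check of the display above (illustrative only), the margin $\frac{3}{2\sigma} - \tilde{\alpha}\cdot 2\sqrt{1+2\kappa}\,\delta - \frac{1}{\sigma}$ is positive for any $\sigma \ge 1$, $\delta > 0$, $\kappa \ge 0$, since $\tilde{\alpha}\cdot 2\sqrt{1+2\kappa}\,\delta = \frac{1}{8\sigma\sqrt{1+2\kappa}} \le \frac{1}{8\sigma}$:

```python
import math

def min_eigenvalue_margin(sigma, delta, kappa):
    """Lower bound on lambda_min(v + alpha~ d_x) minus the 1/sigma
    threshold required by (56), under the default step size (65)."""
    alpha = 1.0 / (16.0 * sigma * delta * (1.0 + 2.0 * kappa))
    lower = 3.0 / (2.0 * sigma) - alpha * 2.0 * math.sqrt(1.0 + 2.0 * kappa) * delta
    return lower - 1.0 / sigma

print(all(min_eigenvalue_margin(s, d, k) > 0
          for s in (1.0, 2.0, 10.0)
          for d in (1.0 / 6.0, 1.0, 5.0)
          for k in (0.0, 0.5, 3.0)))  # True
```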

5.3 Decrease of the value of Ψ ( v ) during an inner iteration

In what follows, we show that the barrier function $\Psi(v)$ decreases in each inner iteration with the default step size $\tilde{\alpha}$ defined by (65). For this, we need the following technical result.

Lemma 5.6 (Lemma 3.12 in [19])

Let $h(t)$ be a twice differentiable convex function with $h(0) = 0$, $h'(0) < 0$, and let $h(t)$ attain its (global) minimum at $t^* > 0$. If $h''(t)$ is increasing for $t \in [0, t^*]$, then
$$h(t) \le \frac{t\,h'(0)}{2}, \quad 0 \le t \le t^*.$$

As a consequence of Lemma 5.6 and the fact that $f(\alpha) \le f_1(\alpha)$, where $f_1(\alpha)$ is a twice differentiable convex function with $f_1(0) = 0$ and $f_1'(0) = -2\delta^2 < 0$, we can easily prove the following lemma.

Lemma 5.7 If the step size $\alpha$ satisfies $\alpha \le \tilde{\alpha}$, then
$$f(\alpha) \le -\alpha\delta^2.$$

The following theorem states the results which show that the default step size (65) yields sufficient decrease of the barrier function value during each inner iteration.

Theorem 5.8 One has
$$f(\tilde{\alpha}) \le -\frac{(\Psi(v))^{\frac12}}{96\sigma(1 + 2\kappa)}.$$
Proof From Lemma 5.7, Corollary 3.8 and (65), we have
$$f(\tilde{\alpha}) \le -\tilde{\alpha}\delta^2 = -\frac{1}{16\sigma\delta(1 + 2\kappa)}\,\delta^2 = -\frac{\delta}{16\sigma(1 + 2\kappa)} \le -\frac{(\Psi(v))^{\frac12}}{96\sigma(1 + 2\kappa)}.$$

This completes the proof of the theorem. □

6 Complexity of the algorithm

In this section, we first derive an upper bound on the number of iterations required by our algorithm. We then obtain iteration bounds that match the currently best known ones for large- and small-update methods, respectively.

6.1 Iteration bound for a large-update method

For the analysis of the algorithm, we need to count how many inner iterations are required to return to the situation where $\Psi(v) \le \tau$. We denote the value of $\Psi(v)$ after the $\mu$-update by $\Psi_0$; the subsequent values in the same outer iteration are denoted by $\Psi_k$, $k = 1, \ldots, K$, where $K$ is the total number of inner iterations in the outer iteration. By the decrease estimate for $f(\tilde{\alpha})$ in Theorem 5.8, we get
$$\Psi_{k+1} \le \Psi_k - \beta(\Psi_k)^{1-\gamma}, \quad k = 0, 1, \ldots, K - 1,$$
(66)

where $\beta = \frac{1}{96\sigma(1 + 2\kappa)}$ and $\gamma = \frac12$.

Lemma 6.1 (Lemma 14 in [19])

Suppose that $t_0, t_1, \ldots, t_K$ is a sequence of positive numbers such that
$$t_{k+1} \le t_k - \beta t_k^{1-\gamma}, \quad k = 0, 1, \ldots, K - 1,$$

where $\beta > 0$ and $0 < \gamma \le 1$. Then $K \le \dfrac{t_0^{\gamma}}{\beta\gamma}$.
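Lemma 6.1 can be exercised numerically. The sketch below (with hypothetical parameter values) runs the recursion with equality, which is the slowest admissible decrease, and compares the resulting iteration count with the bound $t_0^{\gamma}/(\beta\gamma)$.

```python
def inner_iteration_count(t0, beta, gamma):
    """Simulate t_{k+1} = t_k - beta * t_k**(1 - gamma), the slowest decrease
    allowed by the recursion, and count steps until positivity fails."""
    t, k = t0, 0
    while t > 0:
        step = beta * t ** (1.0 - gamma)
        if step <= 0:
            break
        t -= step
        k += 1
    return k

K = inner_iteration_count(100.0, 0.01, 0.5)
bound = 100.0 ** 0.5 / (0.01 * 0.5)  # t0^gamma / (beta * gamma) = 2000
print(K <= bound)  # True: the count respects the Lemma 6.1 bound
```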

Combining Lemma 6.1 and (66), we can easily verify the following main result.

Theorem 6.2 One has
$$K \le 192\sigma(1 + 2\kappa)(\Psi_0)^{\frac12}.$$
By applying Corollary 5.2, (34), and the fact that $\psi(t) \le \frac{t^2}{2}$ when $t \ge 1$, we have
$$\Psi_0 \le r\,\psi\Biggl(\frac{\varrho(\frac{\tau}{r})}{\sqrt{1 - \theta}}\Biggr) \le r\,\psi\Biggl(\frac{\sqrt{\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}}}{\sqrt{1 - \theta}}\Biggr) \le \frac{r}{2(1 - \theta)}\Bigl(\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}\Bigr).$$

From the above expression with $\theta = \Theta(1)$ and $\tau = O(r)$, and also applying Lemma 3.2, we can conclude that $\sigma = O(\log r)$.

The number of outer iterations is bounded above by $\frac{1}{\theta}\log\frac{r}{\varepsilon}$ (cf. Lemma II.17 in [13]). Multiplying the number of outer iterations by the number of inner iterations, we obtain an upper bound on the total number of iterations, namely
$$\frac{192\sigma(1 + 2\kappa)}{\theta}\sqrt{\frac{r}{2(1 - \theta)}\Bigl(\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}\Bigr)}\,\log\frac{r}{\varepsilon}.$$
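With $\theta = \Theta(1)$, $\tau = O(r)$ and $\sigma = O(\log r)$, the total bound simplifies as follows (a routine estimate, spelling out the "elementary reductions"; note that $\frac{2\tau}{r} = O(1)$ and $\frac{2+\sigma}{\sigma} = O(1)$):

```latex
\frac{192\sigma(1+2\kappa)}{\theta}
\sqrt{\frac{r}{2(1-\theta)}\Bigl(\frac{2\tau}{r}+\frac{2+\sigma}{\sigma}\Bigr)}
\log\frac{r}{\varepsilon}
= O\Bigl((1+2\kappa)\,\sigma\sqrt{r}\,\log\frac{r}{\varepsilon}\Bigr)
= O\Bigl((1+2\kappa)\,\sqrt{r}\,\log r\,\log\frac{r}{\varepsilon}\Bigr).
```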

After some elementary reductions, we have the following theorem, which gives the currently best known iteration bound for the large-update method.

Theorem 6.3 For the large-update method, which is characterized by $\theta = \Theta(1)$ and $\tau = O(r)$, the algorithm requires at most
$$O\Bigl((1 + 2\kappa)\sqrt{r}\,\log r\,\log\frac{r}{\varepsilon}\Bigr)$$

iterations. The output is an $\varepsilon$-approximate solution of the Cartesian $P_*(\kappa)$-SCLCP.

6.2 Iteration bound for a small-update method

It is not hard to show that if the above analysis is applied to a small-update method, the resulting iteration bound is not as good as it can be for this type of method. For the analysis of a small-update method, we need to estimate the upper bound of $\Psi_0$ more accurately. It should be noted that the following analysis only holds for $\sigma \ge 2$.

By applying Corollary 5.2, (35), Lemma 3.4, and the fact that $1 - \sqrt{1 - \theta} = \frac{\theta}{1 + \sqrt{1 - \theta}} \le \theta$, we have
$$\Psi_0 \le r\,\psi\Biggl(\frac{\varrho(\frac{\tau}{r})}{\sqrt{1 - \theta}}\Biggr) \le r\,\psi\Biggl(\frac{1 + \sqrt{\frac{\tau}{r}}\bigl(\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}\bigr)^{\frac14}}{\sqrt{1 - \theta}}\Biggr) \le \frac{r(1 + \sigma)}{2}\Biggl(\frac{1 + \sqrt{\frac{\tau}{r}}\bigl(\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}\bigr)^{\frac14}}{\sqrt{1 - \theta}} - 1\Biggr)^2 \le \frac{1 + \sigma}{2(1 - \theta)}\biggl(\theta\sqrt{r} + \sqrt{\tau}\Bigl(\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}\Bigr)^{\frac14}\biggr)^2.$$
From the above expression with $\theta = \Theta(\frac{1}{\sqrt{r}})$ and $\tau = O(1)$, and also applying Lemma 3.2, we can conclude that $\sigma = O(1)$. It follows from Theorem 6.2 that the total number of iterations is bounded above by
$$\frac{192\sigma(1 + 2\kappa)}{\theta}\sqrt{\frac{1 + \sigma}{2(1 - \theta)}}\,\biggl(\theta\sqrt{r} + \sqrt{\tau}\Bigl(\frac{2\tau}{r} + \frac{2 + \sigma}{\sigma}\Bigr)^{\frac14}\biggr)\log\frac{r}{\varepsilon}.$$

After some elementary reductions, we have the following theorem, which gives the currently best known iteration bound for a small-update method.

Theorem 6.4 For a small-update method, which is characterized by $\theta = \Theta(\frac{1}{\sqrt{r}})$ and $\tau = O(1)$, the algorithm requires at most
$$O\Bigl((1 + 2\kappa)\sqrt{r}\,\log\frac{r}{\varepsilon}\Bigr)$$

iterations. The output is an $\varepsilon$-approximate solution of the Cartesian $P_*(\kappa)$-SCLCP.

7 Conclusions and remarks

In this paper, we have shown that the primal-dual IPMs for LO [17] and the $P_*(\kappa)$-LCP [22] based on the finite barrier can be extended to the Cartesian $P_*(\kappa)$-SCLCP. The iteration bounds obtained for large- and small-update methods are $O\bigl((1 + 2\kappa)\sqrt{r}\log r\log\frac{r}{\varepsilon}\bigr)$ and $O\bigl((1 + 2\kappa)\sqrt{r}\log\frac{r}{\varepsilon}\bigr)$, respectively. In both cases, we match the best known iteration bounds for these types of methods. Moreover, this unifies the analysis for the $P_*(\kappa)$-LCP, the Cartesian $P_*(\kappa)$-SOCLCP, and the Cartesian $P_*(\kappa)$-SDLCP.

Some interesting topics for further research remain. One is to investigate whether the NT-scaling scheme can be replaced by other scaling schemes while still obtaining polynomial-time iteration bounds. Another worthwhile direction is the development of infeasible kernel-based IPMs for the SCLCP.

Endnote

a It may be worth mentioning that if we use the kernel function of the classical logarithmic barrier, i.e., $\psi(t) = \frac12(t^2 - 1) - \log t$, then $\psi'(t) = t - t^{-1}$, whence $\psi'(v) = v - v^{-1}$, and hence system (52) coincides with the classical system (51).
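The endnote's derivative can be spot-checked with a central finite difference (an illustrative helper only):

```python
import math

def log_kernel(t):
    """Classical logarithmic kernel: psi(t) = (t^2 - 1)/2 - log t."""
    return 0.5 * (t * t - 1.0) - math.log(t)

def log_kernel_deriv(t):
    """Its derivative: psi'(t) = t - 1/t."""
    return t - 1.0 / t

h = 1e-6
ok = all(
    abs((log_kernel(t + h) - log_kernel(t - h)) / (2 * h) - log_kernel_deriv(t)) < 1e-6
    for t in (0.5, 1.0, 2.0, 3.0)
)
print(ok)  # True
```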

Declarations

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11001169), China Postdoctoral Science Foundation funded project (No. 2012T50427) and Connotative Construction Project of Shanghai University of Engineering Science (No. NHKY-2012-13).

Authors’ Affiliations

(1)
College of Fundamental Studies, Shanghai University of Engineering Science, Shanghai, P.R. China
(2)
College of Advanced Vocational Technology, Shanghai University of Engineering Science, Shanghai, P.R. China

References

  1. Faybusovich L: Linear systems in Jordan algebras and primal-dual interior-point algorithms. J. Comput. Appl. Math. 1997, 86: 149–175.
  2. Yoshise A: Complementarity problems over symmetric cones: a survey of recent developments in several aspects. In Handbook on Semidefinite, Conic and Polynomial Optimization: Theory, Algorithms, Software and Applications. International Series in Operations Research and Management Science 166. Edited by: Anjos MF, Lasserre JB. Springer, New York; 2012:339–376.
  3. Faybusovich L: Euclidean Jordan algebras and interior-point algorithms. Positivity 1997, 1: 331–357.
  4. Potra FA: An infeasible interior point method for linear complementarity problems over symmetric cones. In Proceedings of the 7th International Conference of Numerical Analysis and Applied Mathematics, Rethymno, Crete, Greece, 18–22 September 2009. Edited by: Simos T. Am. Inst. of Phys., New York; 2009:1403–1406.
  5. Yoshise A: Interior point trajectories and a homogeneous model for nonlinear complementarity problems over symmetric cones. SIAM J. Optim. 2006, 17: 1129–1153.
  6. Chen X, Qi HD: Cartesian P-property and its applications to the semidefinite linear complementarity problem. Math. Program. 2006, 106: 177–201.
  7. Pan SH, Chen JS: A regularization method for the second-order cone complementarity problem with the Cartesian $P_0$-property. Nonlinear Anal. 2009, 70: 1475–1491.
  8. Luo ZY, Xiu NH: Path-following interior point algorithms for the Cartesian $P_*(\kappa)$-LCP over symmetric cones. Sci. China Ser. A 2009, 52: 1769–1784.
  9. Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, New York; 1991.
  10. Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian P-matrix linear complementarity problem over symmetric cones. J. Optim. Theory Appl. 2012, 152: 739–772.
  11. Lesaja G, Wang GQ, Zhu DT: Interior-point methods for Cartesian $P_*(\kappa)$-linear complementarity problems over symmetric cones based on the eligible kernel functions. Optim. Methods Softw. 2012, 27: 827–843.
  12. Wang GQ, Lesaja G: Full Nesterov-Todd step feasible interior-point method for the Cartesian $P_*(\kappa)$-SCLCP. Optim. Methods Softw. 2013, 28: 600–618.
  13. Roos C, Terlaky T, Vial J-P: Theory and Algorithms for Linear Optimization: An Interior-Point Approach. Wiley, Chichester; 1997. (2nd edn., Springer, New York; 2006)
  14. Gu G, Zangiabadi M, Roos C: Full Nesterov-Todd step interior-point method for symmetric optimization. Eur. J. Oper. Res. 2011, 214: 473–484.
  15. Liu LX, Liu SY, Wang CF: Smooth Newton methods for the symmetric cone linear complementarity problem with the Cartesian $P$/$P_0$-property. J. Ind. Manag. Optim. 2011, 7: 53–66.
  16. Huang ZH, Lu N: Global and global linear convergence of a smoothing algorithm for the Cartesian $P_*(\kappa)$-SCLCP. J. Ind. Manag. Optim. 2012, 8: 67–86.
  17. Bai YQ, El Ghami M, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766–782.
  18. Bai YQ, El Ghami M, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101–128.
  19. Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002, 93: 129–171.
  20. El Ghami M, Ivanov I, Melissen JBM, Roos C, Steihaug T: A polynomial-time algorithm for linear optimization based on a new class of kernel functions. J. Comput. Appl. Math. 2009, 224: 500–513.
  21. El Ghami M, Roos C, Steihaug T: A generic primal-dual interior-point method for semidefinite optimization based on a new class of kernel functions. Optim. Methods Softw. 2010, 25: 387–403.
  22. Wang GQ, Bai YQ: Polynomial interior-point algorithms for the $P_*(\kappa)$ horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248–263.
  23. Faraut J, Korányi A: Analysis on Symmetric Cones. Oxford University Press, New York; 1994.
  24. Baes M: Convexity and differentiability properties of spectral functions and spectral mappings on Euclidean Jordan algebras. Linear Algebra Appl. 2007, 422: 664–700.
  25. Choi BK, Lee GM: New complexity analysis for primal-dual interior-point methods for self-scaled optimization problems. Fixed Point Theory Appl. 2012, 2012: Article ID 213.
  26. Faybusovich L: A Jordan-algebraic approach to potential-reduction algorithms. Math. Z. 2002, 239: 117–129.
  27. Korányi A: Monotone functions on formally real Jordan algebras. Math. Ann. 1984, 269: 73–76.
  28. Schmieta SH, Alizadeh F: Extension of primal-dual interior-point algorithms to symmetric cones. Math. Program. 2003, 96: 409–438.
  29. Vieira MVC: Jordan algebraic approach to symmetric optimization. PhD thesis, Delft University of Technology, The Netherlands; 2007.
  30. Nesterov YE, Todd MJ: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 1997, 22: 1–42.
  31. Nesterov YE, Todd MJ: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 1998, 8: 324–364.

Copyright

© Wang et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
