
Kernel-function-based primal-dual interior-point methods for convex quadratic optimization over symmetric cone

Abstract

In this paper, we give a unified computational scheme for the complexity analysis of kernel-function-based primal-dual interior-point methods for convex quadratic optimization over symmetric cone. By using Euclidean Jordan algebras, the currently best-known iteration bounds for large- and small-update methods are derived, namely O(√r log r log(r/ε)) and O(√r log(r/ε)), respectively. Furthermore, this unifies the analysis for a wide class of conic optimization problems.

MSC: 90C25, 90C51.

1 Introduction

Since the groundbreaking paper of Karmarkar, many researchers have proposed and analyzed various interior-point methods (IPMs) for linear optimization (LO), and a large number of results have been reported [1–4]. However, there is a gap between the practical behavior of IPMs and their theoretical performance. The so-called small-update IPMs enjoy the best-known worst-case iteration bound O(√n log(n/ε)), but their performance in computational practice is poor. In practice, the so-called large-update IPMs are much more efficient than small-update IPMs, yet they carry the relatively weak theoretical bound O(n log(n/ε)). Recently, Peng et al. [5] introduced so-called self-regular barrier functions for primal-dual IPMs for LO; the iteration bound for large-update methods was thereby improved from O(n log(n/ε)) to O(√n log n log(n/ε)), which almost closes the gap between the iteration bounds for large- and small-update methods. Bai et al. [6] presented a large class of eligible kernel functions, which is fairly general and includes the classical logarithmic function and the self-regular functions, as well as many non-self-regular functions, as special cases. The best-known iteration bounds for LO obtained this way are as good as the ones in [5] for appropriate choices of the eligible kernel functions. For some other related kernel-based IPMs we refer to [7–27].

In this paper, we present a unified kernel-function approach to primal-dual IPMs for convex quadratic optimization over symmetric cone (CQSCO), which is a generalization of symmetric cone optimization (SCO) (obtained when Q = 0) and hence contains LO, second-order cone optimization (SOCO) and semidefinite optimization (SDO) as special cases. CQSCO also includes convex quadratic optimization (CQO) and convex quadratic semidefinite optimization (CQSDO). Let (V, ∘) be an n-dimensional Euclidean Jordan algebra (EJA) of rank r equipped with the standard inner product ⟨x, s⟩ = tr(x ∘ s), and let K be the corresponding symmetric cone. The primal problem of CQSCO is given by

min f(x) = ½⟨x, Q(x)⟩ + ⟨c, x⟩  s.t.  A(x) = b, x ∈ K,
(P)

where c ∈ V and b ∈ R^m are given data, A: V → R^m is a given linear map, and Q is a given self-adjoint positive semidefinite (with respect to ⟨·,·⟩) linear operator on V, i.e., ⟨Q(x), s⟩ = ⟨x, Q(s)⟩ and ⟨Q(x), x⟩ ≥ 0 for all x, s ∈ V. The dual problem of (P) is given by

max −½⟨x, Q(x)⟩ + b^T y  s.t.  A^T(y) + s = ∇f(x) = Q(x) + c, s ∈ K,
(D)

where A^T is the adjoint of A. Many researchers have studied CQSCO and obtained a wealth of results. For an overview we refer to [28–34].

Without loss of generality, we assume that the linear map A is surjective, which implies that AA^T is nonsingular. Furthermore, we also assume that both (P) and (D) satisfy the interior-point condition (IPC), i.e., there exists (x^0, y^0, s^0) such that

A(x^0) = b, x^0 ∈ int K, A^T(y^0) + s^0 − Q(x^0) = c, s^0 ∈ int K.

The perturbed Karush-Kuhn-Tucker optimality conditions for the problems (P) and (D) are given as follows:

A(x) = b, x ∈ K,
A^T(y) + s − Q(x) = c, s ∈ K,
x ∘ s = μe,
(1)

where μ is a positive parameter that is to be driven to zero explicitly. Since the IPC holds and A is surjective, the parameterized system (1) has a unique solution (x(μ), y(μ), s(μ)) for each μ > 0; we call x(μ) the μ-center of (P) and (y(μ), s(μ)) the μ-center of (D). The set of μ-centers (with μ running through all positive real numbers) forms a homotopy path, which is called the central path. If μ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition x ∘ s = 0, it naturally yields an optimal solution for (P) and (D) (see, e.g., [29, 35]).

IPMs follow the central path approximately and find an approximate solution of the underlying problems (P) and (D) as μ goes to zero. Just as in the SDO case, linearizing the third equation in (1) may not lead to an element of V. Thus it is necessary to symmetrize that equation before linearizing it. For this purpose, we can apply the following scaling scheme (cf. Lemma 28 in [36]): let u ∈ int K. Then

x ∘ s = μe  ⟺  P(u)x ∘ P(u^{−1})s = μe.

Thus, we replace the third equation of the system (1) by

P(u)x ∘ P(u^{−1})s = μe.

Applying Newton’s method, and neglecting the term P(u)Δx ∘ P(u^{−1})Δs, we have

A(Δx) = 0,
A^T(Δy) + Δs − Q(Δx) = 0,
P(u)x ∘ P(u^{−1})Δs + P(u^{−1})s ∘ P(u)Δx = μe − P(u)x ∘ P(u^{−1})s.
(2)

The appropriate choices of u that lead to unique search directions from the above system form the so-called commutative class of search directions (see, e.g., [36]). In this paper, we consider the so-called NT-scaling scheme; the resulting direction is called the NT search direction. This scaling scheme was first proposed by Nesterov and Todd [37, 38] for self-scaled cones and then adapted by Faybusovich [35, 39] to symmetric cones.

Lemma 1.1 (Lemma 3.2 in [39])

Let x, s ∈ int K. Then there exists a unique w ∈ int K such that

x = P(w)s.

Moreover,

w = P(x)^{1/2}(P(x^{1/2})s)^{−1/2} [= P(s)^{−1/2}(P(s^{1/2})x)^{1/2}].

The point w is called the scaling point of x and s (in this order). As a consequence, there exists ṽ ∈ int K such that

ṽ = P(w)^{−1/2}x = P(w)^{1/2}s.
(3)

Let u = w^{−1/2}, where w is the NT-scaling point of x and s. We define

v := P(w)^{−1/2}x/√μ [= P(w)^{1/2}s/√μ],
(4)

and the scaled search directions as follows:

d_x := P(w)^{−1/2}Δx/√μ  and  d_s := P(w)^{1/2}Δs/√μ.
(5)

It follows from (4) and (5) that

Ā(d_x) = 0,
Ā^T(Δy) + d_s − Q̄(d_x) = 0,
d_x + d_s = v^{−1} − v,
(6)

where Ā = (1/√μ) AP(w)^{1/2} and Q̄ = P(w)^{1/2}QP(w)^{1/2}. We can easily verify that the system (6) has a unique solution (see, e.g., [29, 35]).

In this paper, we replace the right-hand side of the third equation in (6) by −ψ′(v), i.e., −∇Ψ(v), as defined by (26) (see Section 3), where ψ(t) is any eligible kernel function. This yields the following system:

Ā(d_x) = 0,
Ā^T(Δy) + d_s − Q̄(d_x) = 0,
d_x + d_s = −ψ′(v).
(7)

Since (7) has the same coefficient matrix as (6), (7) also has a unique solution. It follows that the eligible kernel function ψ(t) determines in a natural way the search directions for an interior-point algorithm.

The new search directions d_x and d_s are computed by solving (7); then Δx and Δs are obtained from (5). If (x, y, s) ≠ (x(μ), y(μ), s(μ)), then (Δx, Δy, Δs) is nonzero. The new iterate is obtained according to

x_+ := x + αΔx, y_+ := y + αΔy  and  s_+ := s + αΔs.
(8)

Similarly to the LO case, we require that the step size α be taken so that the proximity measure Ψ(v) decreases sufficiently. A default bound for such a step size α will be given later by (38).

Furthermore, we can conclude that

x ∘ s = μe  ⟺  v = e  ⟺  ∇Ψ(v) = 0  ⟺  Ψ(v) = 0.
(9)

Hence, the value of Ψ(v) can be considered as a measure for the distance between the given iterate (x,y,s) and the corresponding μ-center (x(μ),y(μ),s(μ)).

The algorithm considered in this paper is described in Figure 1.

Figure 1. Algorithm.

Given any eligible kernel function ψ(t), the parameters τ, θ and the step size α should be chosen in such a way that the algorithm is ‘optimized’ in the sense that the number of iterations required by the algorithm is as small as possible. We will show in Section 5 that the resulting iteration bounds depend on the eligible kernel functions.
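As a concrete illustration of the algorithm in Figure 1, the scheme can be sketched for the simplest special case, LO (K = R^n_+, Q = 0), where the NT scaling is diagonal and system (7) reduces to a small linear system. The sketch below uses the classical logarithmic kernel ψ₁(t) = (t²−1)/2 − log t and a simple damped step in place of the default step size (38); it is a minimal sketch under these assumptions, not the implementation analyzed in the paper.

```python
import numpy as np

def psi(t):  return (t**2 - 1) / 2 - np.log(t)   # classical logarithmic kernel
def dpsi(t): return t - 1 / t                    # psi'(t)

def kernel_ipm_lo(A, b, c, x, y, s, theta=0.25, tau=3.0, eps=1e-8, max_inner=500):
    """Kernel-function-based primal-dual IPM sketch for LO (K = R^n_+, Q = 0).
    (x, y, s) must be strictly feasible for the primal-dual pair."""
    n = len(x)
    mu = x @ s / n
    while n * mu >= eps:
        mu *= 1 - theta                          # mu-update (outer iteration)
        for _ in range(max_inner):               # inner loop: recentering
            v = np.sqrt(x * s / mu)
            Psi = psi(v).sum()
            if Psi <= tau:                       # proximity small enough
                break
            d = np.sqrt(x / s)                   # NT scaling is diagonal for LO
            Ad = A * d                           # A_bar up to a 1/sqrt(mu) factor
            rhs = -dpsi(v)                       # system (7): d_x + d_s = -psi'(v)
            xi = np.linalg.solve(Ad @ Ad.T, Ad @ rhs)
            ds_ = Ad.T @ xi                      # d_s lies in the row space of A_bar
            dx_ = rhs - ds_                      # d_x lies in its null space
            dx, dy, ds = np.sqrt(mu) * d * dx_, -np.sqrt(mu) * xi, np.sqrt(mu) * ds_ / d
            # damped step: stay strictly positive, then backtrack until Psi decreases
            ratios = np.concatenate([-x[dx < 0] / dx[dx < 0], -s[ds < 0] / ds[ds < 0]])
            alpha = min(1.0, 0.9 * ratios.min()) if ratios.size else 1.0
            while alpha > 1e-12 and psi(np.sqrt((x + alpha*dx) * (s + alpha*ds) / mu)).sum() >= Psi:
                alpha *= 0.5
            x, y, s = x + alpha*dx, y + alpha*dy, s + alpha*ds
    return x, y, s
```

On a toy LP such as min{2x₁ + 3x₂ + 4x₃ : x₁ + x₂ + x₃ = 3, x ≥ 0} with the strictly feasible start x = e, y = 0, s = c, the iterates should converge to the optimal value 6 with duality gap below the accuracy ε.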

The purpose of the paper is to propose a unified analysis of kernel-function-based primal-dual IPMs for CQSCO and to give a general scheme for calculating the iteration bounds for the entire class of eligible kernel functions. The obtained complexity results match the best-known iteration bounds for large-update methods, O(√r log r log(r/ε)), and for small-update methods, O(√r log(r/ε)). The iteration bounds are of the same order as those for the LO case, except that n is replaced by r, the rank of the EJA. Although expected, these results were not obvious, and at certain steps the analysis was not a trivial and/or straightforward generalization of the LO case. Furthermore, this unifies the analysis for a wide class of conic optimization problems, which includes LO, CQO, SOCO, SDO, CQSDO, SCO and so on.

The outline of the paper is as follows. In Section 2, we provide some basic concepts and useful results on EJAs and symmetric cones. In Section 3, we recall and develop some useful properties of the eligible kernel functions and the corresponding barrier functions. In Section 4, we uniformly analyze the primal-dual IPMs for CQSCO. In Section 5, we derive the complexity bounds for large- and small-update methods. In Section 6, we report some preliminary numerical experiments. Finally, some conclusions and remarks are made in Section 7.

The following notation is used throughout the paper. R^n, R^n_+ and R^n_{++} denote the set of all vectors (with n components), the set of non-negative vectors and the set of positive vectors, respectively. R^{m×n} is the space of all m×n matrices. S^n, S^n_+ and S^n_{++} denote the cones of symmetric, symmetric positive semidefinite and symmetric positive definite n×n matrices, respectively. We use the matrix inner product A•B = tr(A^T B), i.e., the trace of the matrix A^T B. The largest and the smallest eigenvalue of x are denoted by λ_max(x) and λ_min(x), respectively. The Löwner partial ordering ‘⪰_K’ of V defined by a symmetric cone K is given by x ⪰_K s if x − s ∈ K. The interior of K is denoted by int K, and we write x ≻_K s if x − s ∈ int K. Finally, if g(x) ≥ 0 is a real-valued function of a real non-negative variable, the notation g(x) = O(x) means that g(x) ≤ c̄x for some positive constant c̄, and g(x) = Θ(x) that c₁x ≤ g(x) ≤ c₂x for two positive constants c₁ and c₂.

2 Preliminaries

For any x,yV, the Lyapunov transformation L(x) and the quadratic representation P(x) are given by

L(x)y := x ∘ y
(10)

and

P(x) := 2L(x)² − L(x²),
(11)

where L(x)² = L(x)L(x), respectively.

For any EJA V, the corresponding cone of squares

K(V) := { x² : x ∈ V }
(12)

is indeed a symmetric cone (cf. Theorem III.2.1 in [40]). In the sequel, K will always denote a symmetric cone, and V an EJA with rank(V)=r for which K is its cone of squares.

The following theorem gives an important decomposition, the spectral decomposition, on the space V.

Theorem 2.1 (Theorem III.1.2 in [40])

Let x ∈ V. Then there exist a Jordan frame {c₁, …, c_r} and real numbers λ₁(x), …, λ_r(x) such that

x = Σ_{i=1}^r λ_i(x) c_i.
(13)

The numbers λ i (x) (with their multiplicities) are called the eigenvalues of x. Furthermore, the trace and the determinant of x are given by

tr(x) = Σ_{i=1}^r λ_i(x)  and  det(x) = ∏_{i=1}^r λ_i(x),

respectively.

Let x ∈ K have the spectral decomposition given by (13); the vector-valued function ψ(x) is defined by

ψ(x) := ψ(λ₁(x))c₁ + ⋯ + ψ(λ_r(x))c_r.
(14)

Furthermore, if ψ(t) is differentiable, so that the derivative ψ′(t) exists, we also have the vector-valued function ψ′(x), namely

ψ′(x) = ψ′(λ₁(x))c₁ + ⋯ + ψ′(λ_r(x))c_r.
(15)

It should be noted that ψ′(x) is simply the vector-valued function induced by the derivative ψ′(t) of the function ψ(t), rather than the derivative of the vector-valued function ψ(x) defined by (14).
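To make the definition (14) concrete, consider the EJA of real symmetric matrices (the SDO case), where a Jordan frame is given by the rank-one projectors onto an orthonormal eigenbasis. The following hedged NumPy sketch applies a scalar function to the eigenvalues and rebuilds the matrix; the function name `spectral_apply` is an illustrative choice, not notation from the paper.

```python
import numpy as np

def spectral_apply(f, X):
    """Vector-valued spectral function (14) on the EJA of symmetric matrices:
    apply the scalar function f to the eigenvalues of X and rebuild,
    i.e., return sum_i f(lam_i) u_i u_i^T for X = sum_i lam_i u_i u_i^T."""
    lam, U = np.linalg.eigh(X)          # spectral decomposition X = U diag(lam) U^T
    return (U * f(lam)) @ U.T           # U diag(f(lam)) U^T

# sanity check: for f(t) = t^2 the definition recovers the matrix square
X = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(spectral_apply(np.square, X), X @ X)
```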

The following theorem provides another important decomposition, the Peirce decomposition, on the space V.

Theorem 2.2 (Theorem IV.2.1 in [40])

Let xV with the spectral decomposition given by (13). Then we have

V = ⊕_{1≤i≤j≤r} V_{ij},

where

V_{ii} := {x ∈ V | x ∘ c_i = x}  and  V_{ij} := {x ∈ V | x ∘ c_i = ½x = x ∘ c_j}, 1 ≤ i < j ≤ r,

are the Peirce spaces of V. Then, for any x ∈ V, there exist x_i ∈ R and x_{ij} ∈ V_{ij} (i < j) such that

x = Σ_{i=1}^r x_i c_i + Σ_{i<j} x_{ij}.

For any x,sV, we define

⟨x, s⟩ := tr(x ∘ s),
(16)

and we refer to it as the trace inner product. The Frobenius norm induced by this trace inner product, denoted ‖·‖_F, is defined by

‖x‖_F := √⟨x, x⟩.
(17)

Thus, we have

‖x‖_F = √tr(x²) = √(Σ_{i=1}^r λ_i²(x)).
(18)

Furthermore, we can easily verify that

|λ_min(x)| ≤ ‖x‖_F  and  |λ_max(x)| ≤ ‖x‖_F.
(19)

Lemma 2.3 (Lemma 14 in [36])

Let x, s ∈ V. Then

λ_min(x + s) ≥ λ_min(x) + λ_min(s) ≥ λ_min(x) − ‖s‖_F

and

λ_max(x + s) ≤ λ_max(x) + λ_max(s) ≤ λ_max(x) + ‖s‖_F.

Let f: D → R be a univariate function on the open set D ⊆ R that is differentiable (or continuously differentiable where necessary), and let x = Σ_{i=1}^r λ_i(x)c_i be the spectral decomposition of x ∈ V with respect to the Jordan frame {c₁, …, c_r}. The real-valued separable spectral function F: V → R and the vector-valued separable spectral function G: V → V are defined by

F(x) := Σ_{i=1}^r f(λ_i(x))
(20)

and

G(x) := Σ_{i=1}^r f(λ_i(x)) c_i,
(21)

respectively.

The following two theorems give explicitly the first derivatives of F(x) and G(x), respectively.

Theorem 2.4 (Theorem 38 in [41])

Let f be continuously differentiable in D. Then F(x) is continuously differentiable at x and

D_x F(x) = Σ_{i=1}^r f′(λ_i(x)) c_i.

Theorem 2.5 (Lemma 1 in [42])

Let f be continuously differentiable in D. Then G(x) is continuously differentiable at x and

D_x G(x) = Σ_{i=1}^r f′(λ_i(x)) x_i c_i + Σ_{i<j: λ_i(x)=λ_j(x)} f′(λ_i(x)) x_{ij} + Σ_{i<j: λ_i(x)≠λ_j(x)} [f(λ_i(x)) − f(λ_j(x))]/[λ_i(x) − λ_j(x)] x_{ij},

where 1 ≤ i < j ≤ r.

3 Properties of the eligible kernel (barrier) functions

We call a univariate function ψ: (0, ∞) → [0, ∞) a kernel function [5] if it satisfies the following three conditions:

ψ′(1) = ψ(1) = 0,
(22a)
ψ″(t) > 0,
(22b)
lim_{t↓0} ψ(t) = lim_{t→∞} ψ(t) = ∞.
(22c)

This means that ψ(t) is strictly convex and attains its minimum at t = 1, with ψ(1) = 0. Moreover, (22c) implies that ψ(t) has the barrier property.

In this paper, we consider so-called eligible kernel functions [6], i.e., kernel functions satisfying four of the following five conditions, namely the first and the last three:

tψ″(t) + ψ′(t) > 0, t < 1,
(23a)
tψ″(t) − ψ′(t) > 0, t > 1,
(23b)
ψ‴(t) < 0, t > 0,
(23c)
2ψ″(t)² − ψ′(t)ψ‴(t) > 0, t < 1,
(23d)
ψ″(t)ψ′(βt) − βψ′(t)ψ″(βt) > 0, t > 1, β > 1.
(23e)

Note that the first four conditions are logically independent, and the fifth condition is a consequence of (23b) and (23c). Since (23b) is much simpler to check than (23e), in many cases the easiest way to establish that ψ(t) is eligible is to verify the first four conditions (23a)–(23d) together with (23b) [6].
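For instance, the classical logarithmic kernel ψ₁(t) = (t²−1)/2 − log t satisfies (22a)–(22c) as well as (23a)–(23d), hence it is eligible. The sketch below checks the conditions numerically on a grid; this is a sanity check of the inequalities, not a proof.

```python
import numpy as np

# classical logarithmic kernel psi_1(t) = (t^2 - 1)/2 - log t and its derivatives
def psi(t):   return (t**2 - 1) / 2 - np.log(t)
def dpsi(t):  return t - 1 / t
def d2psi(t): return 1 + 1 / t**2
def d3psi(t): return -2 / t**3

t = np.linspace(0.05, 10, 2000)
lo, hi = t[t < 1], t[t > 1]
assert abs(psi(1.0)) < 1e-12 and abs(dpsi(1.0)) < 1e-12      # (22a)
assert np.all(d2psi(t) > 0)                                  # (22b): strict convexity
assert np.all(lo * d2psi(lo) + dpsi(lo) > 0)                 # (23a), t < 1
assert np.all(hi * d2psi(hi) - dpsi(hi) > 0)                 # (23b), t > 1
assert np.all(d3psi(t) < 0)                                  # (23c)
assert np.all(2 * d2psi(lo)**2 - dpsi(lo) * d3psi(lo) > 0)   # (23d), t < 1
```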

The following lemma, cited from [6], states the exponential convexity property, which plays an important role in the analysis of kernel-function-based primal-dual IPMs [5, 6].

Lemma 3.1 (Lemma 2.1 in [6])

Let t 1 >0 and t 2 >0. Then

ψ(√(t₁t₂)) ≤ ½(ψ(t₁) + ψ(t₂)).

Now, we define the barrier function Ψ: int K → R_+ as

Ψ(x, s, μ) := Ψ(v) := tr(ψ(v)).
(24)

It follows from Theorem 2.1 and (14) that

Ψ(v) = Σ_{i=1}^r ψ(λ_i(v)).
(25)

Furthermore, we have, by Theorem 2.4,

∇Ψ(v) = ψ′(v) := ψ′(λ₁(v))c₁ + ⋯ + ψ′(λ_r(v))c_r,
(26)

where ∇Ψ(v) denotes the gradient of the barrier function Ψ(v).

As a consequence of Lemma 3.1, we have the following important result.

Theorem 3.2 (Theorem 4.3.2 in [23])

Let x,sintK. Then

Ψ((P(x)^{1/2}s)^{1/2}) ≤ ½(Ψ(x) + Ψ(s)).

Note that during the course of the algorithm the largest values of Ψ(v) occur just after the update of μ. So next we derive an estimate for the effect of a μ-update on the value of Ψ(v). It follows from (24) that

Ψ(βv) = Σ_{i=1}^r ψ(βλ_i(v)).

By applying Theorem 3.2 in [6], with x taken to be the vector in R^r consisting of all the eigenvalues of v, the theorem below immediately follows.

Theorem 3.3 Let v ∈ int K and β ≥ 1. Then

Ψ(βv) ≤ rψ(βϱ(Ψ(v)/r)).

Corollary 3.4 Let 0 ≤ θ < 1 and v_+ = v/√(1−θ). If Ψ(v) ≤ τ, then

Ψ(v_+) ≤ rψ(ϱ(τ/r)/√(1−θ)).

Proof With β = 1/√(1−θ) ≥ 1 and Ψ(v) ≤ τ, the corollary follows immediately from Theorem 3.3. □

The norm-based proximity measure δ: int K → R_+ is defined by

δ(v) := ½‖∇Ψ(v)‖_F.
(27)

It follows from (17) and (26) that

δ(v) = ½‖ψ′(v)‖_F = ½√(Σ_{i=1}^r ψ′(λ_i(v))²).
(28)

Hence, we can conclude that δ(v) ≥ 0, and δ(v) = 0 if and only if Ψ(v) = 0.
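Both Ψ(v) and δ(v) are simple functions of the eigenvalues of v via (25) and (28). A minimal sketch for the classical logarithmic kernel (an illustrative assumption; any eligible kernel works the same way):

```python
import numpy as np

def psi(t):  return (t**2 - 1) / 2 - np.log(t)   # classical logarithmic kernel
def dpsi(t): return t - 1 / t                    # psi'(t)

def barrier_and_proximity(lam_v):
    """Psi(v) from (25) and delta(v) from (28), given the eigenvalues of v."""
    Psi   = psi(lam_v).sum()
    delta = 0.5 * np.sqrt((dpsi(lam_v)**2).sum())
    return Psi, delta

# at the mu-center all eigenvalues of v equal 1, so Psi = delta = 0
Psi, delta = barrier_and_proximity(np.ones(5))
assert Psi == 0.0 and delta == 0.0
```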

It follows from (25) and (28) that δ(v) and Ψ(v) depend only on the eigenvalues λ_i(v) of v. This observation makes it possible to apply Theorem 4.8 in [6], with x taken to be the vector in R^r consisting of all the eigenvalues of v. This gives the following theorem, which yields a lower bound on δ(v) in terms of Ψ(v).

Theorem 3.5 Let v ∈ int K. Then

δ(v) ≥ ½ψ′(ϱ(Ψ(v))).

In what follows, we consider the derivatives of the function Ψ(x(t)) with respect to t, where x(t) = x⁰ + tu ∈ int K with t ∈ R and u ∈ V. It follows from Theorem 2.1 and Theorem 2.2 that the spectral decomposition of x(t) with respect to the Jordan frame {c₁, …, c_r} can be written as

x(t) = Σ_{i=1}^r λ_i(x(t)) c_i,
(29)

and the Peirce decomposition of u can be defined by

u = Σ_{i=1}^r u_i c_i + Σ_{i<j} u_{ij}.
(30)

From Theorem 2.4 and Theorem 2.5, after some elementary reductions, we can derive the first two derivatives of the general function Ψ(x(t)) with respect to t as follows:

D_t Ψ(x(t)) = tr(D_x Ψ(x(t)) ∘ x′(t)) = tr((Σ_{i=1}^r ψ′(λ_i(x(t))) c_i) ∘ u)
(31)

and

D_t² Ψ(x(t)) = Σ_{i=1}^r ψ″(λ_i(x(t)))(u_i)² + Σ_{i<j: λ_i(x(t))=λ_j(x(t))} ψ″(λ_i(x(t))) tr((u_{ij})²) + Σ_{i<j: λ_i(x(t))≠λ_j(x(t))} [ψ′(λ_i(x(t))) − ψ′(λ_j(x(t)))]/[λ_i(x(t)) − λ_j(x(t))] tr((u_{ij})²).
(32)

The condition (23c) implies that ψ″(t) is monotonically decreasing on (0, +∞). Under the assumption that i < j implies λ_i(x(t)) ≥ λ_j(x(t)), we can conclude that

D_t² Ψ(x(t)) ≤ Σ_{i=1}^r ψ″(λ_i(x(t)))(u_i)² + Σ_{i<j} ψ″(λ_j(x(t))) tr((u_{ij})²),
(33)

which bounds the second-order derivative of Ψ(x(t)) with respect to t (see, e.g., [23]).

4 Analysis of the algorithms

From (8) and (5), after some elementary reductions, we have

x_+ = √μ P(w)^{1/2}(v + αd_x)  and  s_+ = √μ P(w)^{−1/2}(v + αd_s).

Note that during an inner iteration the parameter μ is fixed. Hence, after the default step the new scaled vector v + is given by

v_+ = P(w_+)^{−1/2}P(w)^{1/2}(v + αd_x) = P(w_+)^{1/2}P(w)^{−1/2}(v + αd_s),

where, according to Lemma 1.1,

w_+ = P(x_+)^{1/2}((P(x_+)^{1/2}s_+)^{−1/2}).

To estimate the decrease of the barrier function Ψ(v) during an inner iteration, it is standard to consider the decrease as a function of α defined by

f(α) := Ψ(v_+) − Ψ(v).

However, working with f(α) may not be easy because in general f(α) is not convex. Thus, we look for a convex function f₁(α) that is an upper bound of f(α) and whose derivatives are easier to calculate than those of f(α). The key element in this process is replacing v_+ with a similar element that allows the use of the exponential convexity of the barrier function. By Proposition 5.9.3 in [23], we have

v_+ ∼ (P(v + αd_x)^{1/2}(v + αd_s))^{1/2}, where ∼ denotes similarity (equality of the eigenvalues),

and therefore

Ψ(v_+) = Ψ((P(v + αd_x)^{1/2}(v + αd_s))^{1/2}).

Theorem 3.2 implies that

Ψ(v_+) ≤ ½(Ψ(v + αd_x) + Ψ(v + αd_s)).

Hence, we have

f(α) ≤ f₁(α) := ½(Ψ(v + αd_x) + Ψ(v + αd_s)) − Ψ(v),

which means that f 1 (α) gives an upper bound for the decrease of the barrier function Ψ(v). Furthermore, we can easily verify that f(0)= f 1 (0)=0.

It follows from (31) that

f₁′(α) = ½(tr(ψ′(v + αd_x) ∘ d_x) + tr(ψ′(v + αd_s) ∘ d_s)).

This gives, by (7),

f₁′(0) = ½ tr(ψ′(v) ∘ (d_x + d_s)) = −½ tr(ψ′(v) ∘ ψ′(v)) = −½‖ψ′(v)‖_F² = −2δ(v)² < 0.

Let η = v + αd_x and γ = v + αd_s. To simplify the notation, we use (here and below) the following conventions:

d_{x_i} := (d_x)_i, d_{s_i} := (d_s)_i, d_{x_{ij}} := (d_x)_{ij}  and  d_{s_{ij}} := (d_s)_{ij}.
(34)

It follows directly from (32) and (33) that

f₁″(α) ≤ ½[Σ_{i=1}^r ψ″(λ_i(η))(d_{x_i})² + Σ_{i<j} ψ″(λ_j(η)) tr((d_{x_{ij}})²)] + ½[Σ_{i=1}^r ψ″(λ_i(γ))(d_{s_i})² + Σ_{i<j} ψ″(λ_j(γ)) tr((d_{s_{ij}})²)].
(35)

In the sequel, we use the short notation δ:=δ(v).

Lemma 4.1 One has

‖d_x‖_F² + ‖d_s‖_F² ≤ 4δ².

Proof Since Q is a given self-adjoint positive semidefinite linear operator, we have

⟨d_x, d_s⟩ = ⟨d_x, Q̄(d_x) − Ā^T(Δy)⟩ = ⟨d_x, (P(w)^{1/2}QP(w)^{1/2})(d_x)⟩ − ⟨d_x, Ā^T(Δy)⟩ = ⟨P(w)^{1/2}d_x, Q(P(w)^{1/2}d_x)⟩ ≥ 0, where the middle term vanishes because ⟨d_x, Ā^T(Δy)⟩ = ⟨Ā(d_x), Δy⟩ = 0.
(36)

Hence, we have

4δ² = ‖d_x + d_s‖_F² = ‖d_x‖_F² + ‖d_s‖_F² + 2⟨d_x, d_s⟩ ≥ ‖d_x‖_F² + ‖d_s‖_F².
(37)

This completes the proof of the lemma. □

Similarly to the proof of Lemma 4.1 in [6], we have the following lemma, which gives an upper bound on f₁″(α) in terms of δ and ψ″(t).

Lemma 4.2 One has

f₁″(α) ≤ 2δ²ψ″(λ_min(v) − 2αδ).

Let ρ: [0, ∞) → (0, 1] be the inverse function of −½ψ′(t) for t ∈ (0, 1], where ψ(t) is the eligible kernel function. Similarly to the LO case, a default step size is chosen. In this paper, we use

ᾱ := 1/ψ″(ρ(2δ))
(38)

as the default step size.
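For the classical logarithmic kernel ψ₁(t) = (t²−1)/2 − log t, the inverse ρ admits a closed form: −½ψ′(t) = s gives t² + 2st − 1 = 0, so ρ(s) = √(s²+1) − s, and the default step size (38) becomes elementary. A minimal sketch under this choice of kernel:

```python
import numpy as np

def d2psi(t): return 1 + 1 / t**2                 # psi'' for the log kernel psi_1

def rho(s):
    """Inverse of -psi'(t)/2 on (0, 1] for psi_1; solves t**2 + 2*s*t - 1 = 0."""
    return np.sqrt(s**2 + 1) - s

def default_step(delta):
    """Default step size (38): alpha = 1 / psi''(rho(2*delta))."""
    return 1.0 / d2psi(rho(2 * delta))

assert abs(rho(0.0) - 1.0) < 1e-12                # at the center: rho(0) = 1
assert abs(default_step(0.0) - 0.5) < 1e-12       # psi''(1) = 2, so alpha = 1/2
t = rho(3.0)
assert abs(-0.5 * (t - 1 / t) - 3.0) < 1e-12      # rho really inverts -psi'/2
```

As expected, the default step size shrinks as the proximity δ grows, since ρ(2δ) moves toward 0 and ψ″ blows up there.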

In what follows, we show that the barrier function Ψ(v) decreases in each inner iteration with the default step size ᾱ, as defined by (38). For this, we need the following technical result.

Lemma 4.3 (Lemma 3.12 in [5])

Let h(t) be a twice differentiable convex function with h(0) = 0, h′(0) < 0, which attains its (global) minimum at t* > 0. If h″(t) is increasing for t ∈ [0, t*], then

h(t) ≤ th′(0)/2, 0 ≤ t ≤ t*.

As a consequence of Lemma 4.3 and the fact that f(α) ≤ f₁(α), where f₁(α) is a twice differentiable convex function with f₁(0) = 0 and f₁′(0) = −2δ² < 0, we can easily prove the following lemma.

Lemma 4.4 Let the step size α satisfy α ≤ ᾱ. Then

f(α) ≤ −αδ².

Combining the results of Lemma 4.4 and (38), we have the following theorem, which shows that the default step size (38) yields a sufficient decrease of the barrier function value during each inner iteration.

Theorem 4.5 Let α ˜ be the default step size, as given by (38). Then

f(ᾱ) ≤ −δ²/ψ″(ρ(2δ)).
(39)

By using the condition (23d), we can conclude that the right-hand side of (39) is monotonically decreasing in δ (cf. Lemma 4.7 in [6]). Thus, combining Theorem 4.5 and Theorem 3.5, we have

f(ᾱ) ≤ −(ψ′(ϱ(Ψ(v))))²/(4ψ″(ρ(ψ′(ϱ(Ψ(v)))))).
(40)

This expresses the decrease of Ψ(v) during an inner iteration entirely in terms of ψ, its first and second derivatives, and the inverse functions ρ and ϱ.

5 Complexity of the algorithms

In this section, we first derive an upper bound on the number of iterations required by the algorithm depicted in Figure 1. Then we conclude the section by applying the iteration bound to a wide variety of kernel functions.

5.1 Iteration bounds for the algorithms

We need to count how many inner iterations are required to return to the situation where Ψ(v) ≤ τ. We denote the value of Ψ(v) just after the μ-update by Ψ₀, and the subsequent values in the same outer iteration by Ψ_k, k = 1, 2, …, K, where K denotes the total number of inner iterations in the outer iteration.

Let the constants β > 0 and γ ∈ (0, 1] be such that, for Ψ(v) ≥ τ,

(ψ′(ϱ(Ψ(v))))²/(4ψ″(ρ(ψ′(ϱ(Ψ(v)))))) ≥ βΨ(v)^{1−γ}.

Note that the left-hand side expression is increasing in Ψ(v). Therefore, such numbers β and γ certainly exist (take, e.g., γ=1 and β equals the value of the left-hand side expression for Ψ(v)=τ). In addition, the appropriate values of β and γ will vary for each eligible kernel function and finding them may not always be straightforward.

The following lemma provides an estimate for the number of inner iterations between two successive barrier parameter updates, in terms of Ψ 0 and the parameters β and γ.

Lemma 5.1 One has

K ≤ Ψ₀^γ/(βγ).

Proof The definition of K implies Ψ_{K−1} > τ, Ψ_K ≤ τ, and

Ψ_{k+1} ≤ Ψ_k − β(Ψ_k)^{1−γ}, k = 0, 1, …, K−1.

Thus, the conclusion of the lemma follows immediately from Lemma 14 in [5] with t k = Ψ k . This completes the proof of the lemma. □

The number of outer iterations coincides with the number of barrier parameter updates needed to reach rμ < ε. It is well known (cf. Lemma II.17 in [1]) that the number of outer iterations is bounded above by (1/θ) log(r/ε). Thus, an upper bound on the total number of iterations is obtained by multiplying the number of outer iterations by the number of inner iterations.
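The outer-iteration bound is easy to check numerically: starting from μ₀ = 1 and repeatedly multiplying by 1−θ until rμ < ε never takes more than (1/θ) log(r/ε) updates. The sample values of r, ε and θ below are arbitrary illustrations.

```python
import numpy as np

# count mu-updates until r*mu < eps, starting from mu_0 = 1, and compare with
# the classical upper bound (1/theta) * log(r/eps)  (cf. Lemma II.17 in [1])
r, eps, theta = 10, 1e-7, 0.1
mu, k = 1.0, 0
while r * mu >= eps:
    mu *= 1 - theta          # one barrier parameter update per outer iteration
    k += 1
bound = np.log(r / eps) / theta
assert k <= bound            # the actual count respects the theoretical bound
```

The slack between k and the bound comes from the elementary inequality −log(1−θ) ≥ θ.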

Theorem 5.2 The total number of iterations is bounded above by

(Ψ₀^γ/(θβγ)) log(r/ε).

From Theorem 5.2 and Corollary 3.4, we obtain the following iteration bound for the algorithm depicted in Figure 1:

(1/(θβγ)) (rψ(ϱ(τ/r)/√(1−θ)))^γ log(r/ε),
(41)

which means that the total number of iterations is completely determined by the parameters θ, β, γ, τ, and the eligible kernel function ψ(t).

5.2 Application to the eligible kernel functions

It follows from Theorem 5.2 that the iteration bound of the algorithm depends on the parameters β and γ and on the upper bound for Ψ₀. Since these differ for different eligible kernel functions, the iteration bounds also vary. Similarly to the analysis in Section 6.1 of [6] for the LO case, the iteration bounds for large- and small-update methods based on eligible kernel functions can be derived in a systematic way by using the following scheme.

  • Step 0: Input an eligible kernel function ψ(t); an update parameter θ, 0<θ<1; a threshold parameter τ; and an accuracy parameter ε.

  • Step 1: Solve the equation −½ψ′(t) = s to get ρ(s), the inverse function of −½ψ′(t) for t ∈ (0, 1]. If the equation is hard to solve, derive a lower bound for ρ(s).

  • Step 2: Calculate the decrease of Ψ(v) in terms of δ for the default step size ᾱ from

    f(ᾱ) ≤ −δ²/ψ″(ρ(2δ)).
  • Step 3: Solve the equation ψ(t) = s to get ϱ(s), the inverse function of ψ(t) for t ≥ 1. If the equation is hard to solve, derive lower and upper bounds for ϱ(s).

  • Step 4: Derive a lower bound for δ(v) in terms of Ψ(v) by using

    δ(v) ≥ ½ψ′(ϱ(Ψ(v))).
  • Step 5: Using the results of Step 3 and Step 4, find positive constants β and γ, with γ ∈ (0, 1], such that

    f(ᾱ) ≤ −βΨ(v)^{1−γ}.
  • Step 6: Calculate the uniform upper bound Ψ₀ for Ψ(v) from

    Ψ₀ ≤ L_ψ(r, θ, τ) := rψ(ϱ(τ/r)/√(1−θ)).
  • Step 7: Derive an upper bound for the total number of iterations from

    (Ψ₀^γ/(θβγ)) log(r/ε).
  • Step 8: Set τ = O(r) and θ = Θ(1) to obtain an iteration bound for large-update methods, or set τ = O(1) and θ = Θ(1/√r) to obtain an iteration bound for small-update methods.
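As an illustration of Steps 3 and 6 for the classical logarithmic kernel ψ₁ (an illustrative assumption; the scheme applies verbatim to any eligible kernel), the inverse ϱ can be computed by bisection, since ψ is strictly increasing on [1, ∞):

```python
import numpy as np

def psi(t): return (t**2 - 1) / 2 - np.log(t)     # classical logarithmic kernel

def varrho(s, hi=1e8):
    """Step 3: inverse of psi(t) for t >= 1, computed by bisection."""
    lo = 1.0
    for _ in range(200):                          # interval width shrinks to ~0
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if psi(mid) < s else (lo, mid)
    return (lo + hi) / 2

def Psi0_bound(r, theta, tau):
    """Step 6: uniform bound Psi_0 <= r * psi(varrho(tau/r) / sqrt(1-theta))."""
    return r * psi(varrho(tau / r) / np.sqrt(1 - theta))

# small-update regime: tau = O(1), theta = Theta(1/sqrt(r))
r, tau = 25, 3.0
theta = 1 / (2 * np.sqrt(r))
assert abs(psi(varrho(2.0)) - 2.0) < 1e-6         # varrho inverts psi on [1, oo)
assert Psi0_bound(r, theta, tau) > tau            # the bound exceeds the threshold
```

With Ψ₀ bounded this way, Step 7 turns directly into the total iteration count of Theorem 5.2.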

The iteration bounds resulting from a wide class of eligible kernel functions have been derived in a series of papers [5–10, 14, 15, 17–19, 21], starting with [6] for LO; hence we immediately obtain the iteration bounds for large- and small-update methods for CQSCO. The resulting bounds are summarized in the third and fourth columns of Table 1. For the detailed analysis of the algorithms, we refer to the given references.

Table 1 Complexity results for the eligible kernel functions

Remark 5.3 For large-update methods, the currently best-known iteration bound is

O(√r log r log(r/ε)).

In particular, for ψ₃(t) and ψ₄(t) this bound is obtained if we choose q = ½ log r, and for ψ₇(t) and ψ₈(t) this bound is obtained if we choose q = log r. The same bound is achieved for ψ₁₈(t) by taking p = 1 and q = ½ log r.

Remark 5.4 For small-update methods, the currently best-known iteration bound is

O(√r log(r/ε)).

In particular, for ψ₃(t), ψ₄(t), ψ₇(t), ψ₈(t), ψ₁₆(t) and ψ₁₈(t), this bound is obtained if we take q = O(1).

Both for large- and small-update methods, the iteration bounds are of the same order as the bounds for the LO case, except that n is replaced by r, the rank of the EJA. Thus, the iteration bounds are as good as they can be in the current state of the art.

6 Numerical results

In this section, we report the computational performance of the algorithm depicted in Figure 1 for CQSDO, which is an important special case of CQSCO. The numerical experiments were carried out on a PC with an Intel (R) Core (TM) i5-2500 Duo CPU at 3.30 GHz and 8 GB of physical memory, running MATLAB Version 7.11.0.584 (R2010b) on a Windows 7 Enterprise 64-bit operating system.

We consider the primal problem of CQSDO in the standard form

min { ½X•Q(X) + C•X : A_i•X = b_i, i = 1, 2, …, m, X ⪰ 0 },

and its dual problem

max { −½X•Q(X) + b^Ty : Σ_{i=1}^m y_i A_i − Q(X) + S = C, S ⪰ 0 }.

Here, Q: S^n → S^n is a given self-adjoint positive semidefinite linear operator on S^n, i.e., for any A, B ∈ S^n, Q(A)•B = A•Q(B) and Q(A)•A ≥ 0; b ∈ R^m is a given vector and C ∈ S^n is a given matrix. Without loss of generality, we assume that the matrices A_i, i = 1, 2, …, m, are linearly independent and that CQSDO satisfies the IPC. A detailed discussion and analysis of primal-dual IPMs for CQSDO can be found in [24, 28, 34].

Let us define

P := X^{1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2} [= S^{−1/2}(S^{1/2}XS^{1/2})^{1/2}S^{−1/2}],
(42)

and we also define D := P^{1/2}. This leads to the definition of the following variance matrix:

V := (1/√μ) D^{−1}XD^{−1} [= (1/√μ) DSD].
(43)
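The NT scaling (42)–(43) is easy to verify numerically with symmetric matrix fractional powers. The sketch below (with μ set to 1 for simplicity, an assumption of the example) checks that P scales S to X, i.e., PSP = X, and that the two expressions for V in (43) coincide.

```python
import numpy as np

def mat_pow(M, p):
    """Symmetric matrix power via eigendecomposition (M must be positive definite)."""
    lam, U = np.linalg.eigh(M)
    return (U * lam**p) @ U.T

def nt_scaling(X, S):
    """NT scaling matrix (42) and variance matrix (43) for CQSDO, with mu = 1."""
    Xh = mat_pow(X, 0.5)
    P = Xh @ mat_pow(Xh @ S @ Xh, -0.5) @ Xh      # formula (42)
    D = mat_pow(P, 0.5)
    Dinv = mat_pow(P, -0.5)
    V = Dinv @ X @ Dinv                           # equals D @ S @ D by (43)
    return P, D, V

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X = B @ B.T + 4 * np.eye(4)                       # random SPD pair (X, S)
M = rng.standard_normal((4, 4))
S = M @ M.T + 4 * np.eye(4)
P, D, V = nt_scaling(X, S)
assert np.allclose(P @ S @ P, X)                  # P scales S to X
assert np.allclose(V, D @ S @ D)                  # both expressions in (43) agree
```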

Furthermore, we define the scaled search directions as follows:

D_X := (1/√μ) D^{−1}ΔX D^{−1}  and  D_S := (1/√μ) DΔSD.
(44)

The scaled search direction ( D X ,Δy, D S ) is computed through solving the following linear system:

Ā_i • D_X = 0, i = 1, 2, …, m,
Σ_{i=1}^m Δy_i Ā_i − Q̄(D_X) + D_S = 0,
D_X + D_S = V^{−1} − V,
(45)

where

Ā_i := (1/√μ) DA_iD, i = 1, 2, …, m,  and  Q̄(D_X) := DQ(DD_XD)D.

Then the new search direction (ΔX, Δy, ΔS) is obtained from (44). If (X, y, S) ≠ (X(μ), y(μ), S(μ)), then (ΔX, Δy, ΔS) is nonzero. The new iterate is obtained by taking a step of size α along the search directions as follows:

X_+ := X + αΔX, y_+ := y + αΔy  and  S_+ := S + αΔS.
(46)

It should be noted that the default step size (38) selected during each inner iteration is small enough for analyzing the algorithm, while in practice it should be chosen as large as possible for the efficiency of the algorithm. In the following test problem, we choose the maximum allowed step size such that the next iterate satisfies the positive semidefiniteness conditions X + αΔX ⪰ 0 and S + αΔS ⪰ 0.

We consider the following special CQSDO example with Q(X)=E:

A 1 = ( 2 1 0 1 1 1 2 1 1 0 0 2 1 2 1 0 0 0 2 1 0 2 1 0 1 2 1 0 1 1 2 2 1 1 0 1 2 0 1 1 1 2 2 1 0 0 2 1 2 1 1 2 1 2 2 2 1 0 0 2 1 1 2 0 ) , A 2 = ( 2 1 0 1 1 1 2 1 1 0 0 2 1 2 1 0 0 0 2 1 0 2 1 0 1 2 1 0 1 1 2 2 1 1 0 1 2 0 1 1 1 2 2 1 0 0 2 1 2 1 1 2 1 2 2 2 1 0 0 2 1 1 2 0 ) , A 3 = ( 2 1 0 1 1 1 2 1 1 0 0 2 1 2 1 0 0 0 2 1 0 2 1 0 1 2 1 0 1 1 2 2 1 1 0 1 2 0 1 1 1 2 2 1 0 0 2 1 2 1 1 2 1 2 2 2 1 0 0 2 1 1 2 0 ) , A 4 = ( 2 1 0 1 1 1 2 1 1 0 0 2 1 2 1 0 0 0 2 1 0 2 1 0 1 2 1 0 1 1 2 2 1 1 0 1 2 0 1 1 1 2 2 1 0 0 2 1 2 1 1 2 1 2 2 2 1 0 0 2 1 1 2 0 ) , b = ( 8 4 12 8 ) , Q ( X ) = ( 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 ) , C = ( 6 4 4 6 5 1 5 3 4 4 6 5 6 4 3 4 4 6 6 3 4 6 5 5 6 5 3 0 5 5 3 4 5 6 4 5 4 2 3 2 1 4 6 5 2 2 6 4 5 3 5 3 3 6 8 6 3 4 5 4 2 4 6 2 ) .

In the test problem, we use the threshold parameter τ = 3, the accuracy parameter ε = 10^{−7}, and the update parameter θ = 1/(2√n) with n = 8 in the implementation. In this case, the algorithm depicted in Figure 1 is indeed a small-update method. We choose X = E, y = e and S = E as the starting point for our algorithm, where E and e denote the identity matrix of dimension 8 and the all-one vector of dimension 4, respectively. Note that this point is strictly feasible. The initial value of the barrier parameter μ is X•S/n with n = 8, i.e., μ = 1. One can easily check that Ψ(X, S; μ) = 0 < τ = 3, so these data can indeed be used to initialize our algorithm.

An optimal solution of the primal problem is given by

X = ( 0.1788 0.1173 0.0605 0.0769 0.0034 0.0088 0.1511 0.1572 0.1173 0.3874 0.0684 0.1492 0.1188 0.3490 0.2450 0.1194 0.0605 0.0684 0.5162 0.0046 0.1338 0.0294 0.1971 0.0010 0.0769 0.1492 0.0046 0.0789 0.0746 0.1092 0.0864 0.0252 0.0034 0.1188 0.1338 0.0746 0.2652 0.1015 0.0095 0.0208 0.0088 0.3490 0.0294 0.1092 0.1015 0.4175 0.1547 0.2746 0.1511 0.2450 0.1971 0.0864 0.0095 0.1547 0.2667 0.0321 0.1572 0.1194 0.0010 0.0252 0.0208 0.2746 0.0321 0.3780 )

and for the dual problem an optimal solution is given by

y = ( 1.0564 ; 0.8583 ; 1.1417 ; 1.0416 ) , S = ( 0.0660 0.0192 0.0228 0.0212 0.0403 0.0652 0.0033 0.0698 0.0192 0.0206 0.0149 0.0053 0.0207 0.0473 0.0052 0.0361 0.0228 0.0149 0.0366 0.0483 0.0496 0.0579 0.0427 0.0427 0.0212 0.0053 0.0483 0.0789 0.0652 0.0527 0.0681 0.0379 0.0403 0.0207 0.0496 0.0652 0.0691 0.0819 0.0532 0.0645 0.0652 0.0473 0.0579 0.0527 0.0819 0.1340 0.0414 0.1069 0.0033 0.0052 0.0427 0.0681 0.0532 0.0414 0.0705 0.0194 0.0698 0.0361 0.0427 0.0379 0.0645 0.1069 0.0194 0.0945 ) .

The respective objective values are ½⟨X, Q(X)⟩ + ⟨C, X⟩ = 32.959138158 and bᵀy − ½⟨X, Q(X)⟩ = 32.959138116, and the duality gap ⟨X, S⟩ is 4.2163 × 10⁻⁸, which is less than 10⁻⁷.

The numerical results of the IPM for the sample CQSDO problem based on ψ1(t) with θ = 1/(2√n) are summarized in Table 2. Our small-update method needs 19 main iterations to reach the desired accuracy. To save space, we show the primal and dual objective values only at the moments when the duality gap has again been reduced by a factor of 10, until the desired accuracy is achieved.

Table 2 Output of IPM for the sample problem of CQSDO based on ψ1(t) with θ = 1/(2√n)

It is clear from Table 2 that the small-update method presented in this paper is not efficient from a practical point of view, just as feasible IPMs with the best theoretical performance are far from practical. In fact, our algorithm suffers from the usual drawback of primal-dual IPMs: the number of iterations needed for convergence tends to be close to the upper bound, namely O(√n log(n/ε)). This is due to the small, fixed μ-updates (i.e., μ+ = (1 − θ)μ with θ = 1/(2√n) for CQSDO). It is desirable to make the largest possible update θ at each iteration, albeit at the cost of extra computation.

In order to reveal the impact of the update parameter θ on the performance of the algorithm, we take a much larger update parameter, θ = 0.9, in the implementation. In this case, the algorithm depicted in Figure 1 is indeed a large-update method. Now only 14 main iterations are needed to reach the desired accuracy. The output of the IPM for the sample CQSDO problem based on ψ1(t) with θ = 0.9 is shown in Table 3.

Table 3 Output of IPM for the sample problem of CQSDO based on ψ 1 (t) with θ=0.9

It is clear from Table 3 that the iteration count of the algorithm depends on the update parameter θ: a larger value of θ gives better results. It should be pointed out, however, that if θ is chosen too large, the computational procedure may fail to solve the problem. In the solution procedure, one might use dynamic updates of the barrier parameter, as described in [1], which may significantly enhance the practical performance of the proposed algorithm.
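The influence of θ can already be seen from the μ-update alone. The sketch below counts only the outer reductions μ+ = (1 − θ)μ until nμ < ε, ignoring inner iterations, so these counts are illustrative and are not the main-iteration numbers reported in Tables 2 and 3:

```python
import math

def outer_updates(theta, n=8, eps=1e-7, mu=1.0):
    """Number of reductions mu <- (1 - theta) * mu until n * mu < eps."""
    k = 0
    while n * mu >= eps:
        mu *= 1 - theta
        k += 1
    return k

small = outer_updates(theta=1 / (2 * math.sqrt(8)))  # small-update schedule
large = outer_updates(theta=0.9)                     # large-update schedule
```

With these data the small-update schedule needs an order of magnitude more μ-reductions than the large-update one, mirroring the gap between Tables 2 and 3.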

7 Conclusions and remarks

In this paper, we presented a unified approach to and comprehensive treatment of primal-dual IPMs for CQSCO based on the entire class of eligible kernel functions. For large-update methods the best iteration bound is O(√r log r log(r/ε)), and for small-update methods all iteration bounds have the same order of magnitude, namely O(√r log(r/ε)), which almost closes the gap between the iteration bounds for large- and small-update methods. Some preliminary numerical results are provided to demonstrate the computational performance of the algorithm depicted in Figure 1.

The paper generalizes results obtained in [6], where Bai et al. consider kernel-function-based primal-dual IPMs for LO, and in [11, 16, 30] and [23], where Bai et al., El Ghami et al., Wang et al. and Vieira consider the same type of IPMs for SOCO, SDO, CQSDO and SCO, respectively. It turns out that the iteration bounds are the same as for the non-negative orthant, except that n is replaced by r, the rank of the EJA. However, the analysis of the proposed algorithm is far more complicated than in [6, 11, 16, 23]. This is due to the fact that the orthogonality of the search directions, which holds in the LO, SOCO, SDO, and SCO cases, does not hold for CQSCO.

Some interesting topics for further research remain. First, the search directions used in this paper are based on the NT-symmetrization scheme, and it is natural to ask whether other symmetrization schemes can be used. Second, although we presented a simple example to show the computational performance of the proposed algorithm, more numerical experiments are needed to compare the behavior of our algorithm with other existing IPMs. Finally, the extension to general nonlinear optimization over symmetric cones deserves to be investigated.

Endnote

a It may be worth mentioning that if we use the kernel function of the classical logarithmic barrier function, i.e., ψ(t) = ½(t² − 1) − log t, then ψ′(t) = t − t⁻¹, whence −ψ′(v) = v⁻¹ − v, and hence system (7) coincides with the classical system (6).
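The endnote's derivative can be sanity-checked against a central finite difference; the evaluation point 1.7 below is an arbitrary choice of ours:

```python
import math

def psi(t):
    # classical logarithmic barrier kernel from the endnote
    return 0.5 * (t * t - 1) - math.log(t)

def dpsi(t):
    # psi'(t) = t - 1/t
    return t - 1.0 / t

t, h = 1.7, 1e-6
numeric = (psi(t + h) - psi(t - h)) / (2 * h)   # central difference
assert abs(numeric - dpsi(t)) < 1e-9
# -psi'(v) = 1/v - v, the right-hand side appearing in system (7)
assert abs(-dpsi(t) - (1.0 / t - t)) < 1e-15
```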

References

  1. Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization. An Interior-Point Approach. Wiley, Chichester; 1997.

  2. Wright SJ: Primal-Dual Interior-Point Methods. SIAM, Philadelphia; 1997.

  3. Ye Y: Interior Point Algorithms: Theory and Analysis. Wiley, Chichester; 1997.

  4. Anjos MF, Lasserre JB International Series in Operational Research and Management Science 166. In Handbook on Semidefinite, Conic and Polynomial Optimization: Theory, Algorithms, Software and Applications. Springer, New York; 2012.

  5. Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002,93(1):129–171. 10.1007/s101070200296

  6. Bai YQ, El Ghami M, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004,15(1):101–128. 10.1137/S1052623403423114

  7. Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007,49(2):259–270. 10.1017/S1446181100012827

  8. Bai YQ, Lesaja G, Roos C, Wang GQ, El Ghami M: A class of large-update and small-update primal-dual interior-point algorithms for linear optimization. J. Optim. Theory Appl. 2008,138(3):341–359. 10.1007/s10957-008-9389-z

  9. Bai YQ, Roos C: A polynomial-time algorithm for linear optimization based on a new simple kernel function. Optim. Methods Softw. 2003,18(6):631–646. 10.1080/10556780310001639735

  10. Bai YQ, Roos C, El Ghami M: A primal-dual interior-point method for linear optimization based on a new proximity function. Optim. Methods Softw. 2002,17(6):985–1008. 10.1080/1055678021000090024

  11. Bai YQ, Wang GQ, Roos C: Primal-dual interior-point algorithms for second-order cone optimization based on kernel functions. Nonlinear Anal. 2009,70(10):3584–3602. 10.1016/j.na.2008.07.016

  12. Cai XZ, Wang GQ, Zhang ZH: Complexity analysis and numerical implementation of primal-dual interior-point methods for convex quadratic optimization based on a finite barrier. Numer. Algorithms 2013,62(2):289–306. 10.1007/s11075-012-9581-y

  13. Chi XN, Liu SY: An infeasible-interior-point predictor-corrector algorithm for the second-order cone program. Acta Math. Sci. 2008,28(3):551–559. 10.1016/S0252-9602(08)60058-2

  14. Cho GM: Primal-dual interior-point method based on a new barrier function. J. Nonlinear Convex Anal. 2011,12(3):611–624.

  15. Cho GM: An interior-point algorithm for linear optimization based on a new barrier function. Appl. Math. Comput. 2011,218(2):386–395. 10.1016/j.amc.2011.05.075

  16. El Ghami M, Bai YQ, Roos C: Kernel-function based algorithms for semidefinite optimization. RAIRO Oper. Res. 2009,43(2):189–199. 10.1051/ro/2009011

  17. El Ghami M, Guennoun ZA, Bouali S, Steihaug T: Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term. J. Comput. Appl. Math. 2012,236(15):3613–3623. 10.1016/j.cam.2011.05.036

  18. El Ghami M, Ivanov ID, Roos C, Steihaug T: A polynomial-time algorithm for LO based on generalized logarithmic barrier functions. Int. J. Appl. Math. 2008,21(1):99–115.

  19. El Ghami M, Roos C: Generic primal-dual interior point methods based on a new kernel function. RAIRO Oper. Res. 2008,42(2):199–213. 10.1051/ro:2008009

  20. Gu G, Zangiabadi M, Roos C: Full Nesterov-Todd step infeasible interior-point method for symmetric optimization. Eur. J. Oper. Res. 2011,214(3):473–484. 10.1016/j.ejor.2011.02.022

  21. Peyghami MR, Hafshejani SF, Shirvani L: Complexity of interior-point methods for linear optimization based on a new trigonometric kernel function. J. Comput. Appl. Math. 2014,255(1):74–85.

  22. Tang JY, He GP, Fang L: A new kernel function and its related properties for second-order cone optimization. Pac. J. Optim. 2012,8(2):321–346.

  23. Vieira, MVC: Jordan algebraic approach to symmetric optimization. PhD thesis, Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands (2007)

  24. Wang GQ, Bai YQ, Roos C: Primal-dual interior-point algorithms for semidefinite optimization based on a simple kernel function. J. Math. Model. Algorithms 2005,4(4):409–433. 10.1007/s10852-005-3561-3

  25. Wang GQ, Bai YQ: A new primal-dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 2009,353(1):339–349. 10.1016/j.jmaa.2008.12.016

  26. Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian P-matrix linear complementarity problem over symmetric cones. J. Optim. Theory Appl. 2012,152(3):739–772. 10.1007/s10957-011-9938-8

  27. Wang GQ, Bai YQ: A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 2012,154(3):966–985. 10.1007/s10957-012-0013-x

  28. Toh KC: An inexact primal-dual path following algorithm for convex quadratic SDP. Math. Program. 2008,112(1):221–254.

  29. Li L, Toh KC: A polynomial-time inexact interior-point method for convex quadratic symmetric cone programming. J. Math-for-Ind. 2010, 2B: 199–212.

  30. Wang GQ, Zhu DT: A unified kernel function approach to primal-dual interior-point algorithms for convex quadratic SDO. Numer. Algorithms 2011,57(4):537–558. 10.1007/s11075-010-9444-3

  31. Wang GQ, Zhang ZH, Zhu DT: On extending primal-dual interior-point method for linear optimization to convex quadratic symmetric cone optimization. Numer. Funct. Anal. Optim. 2012,34(5):576–603.

  32. Wang GQ, Yu CJ, Teo KL: A full Nesterov-Todd step feasible interior-point method for convex quadratic optimization over symmetric cone. Appl. Math. Comput. 2013,221(15):329–343.

  33. Bai YQ, Zhang LP: A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. J. Ind. Manag. Optim. 2011,7(4):891–906.

  34. Achache M: A full Nesterov-Todd step feasible primal-dual interior point algorithm for convex quadratic semi-definite optimization. Appl. Math. Comput. 2014,231(1):581–590.

  35. Faybusovich L: Euclidean Jordan algebras and interior-point algorithms. Positivity 1997,1(4):331–357. 10.1023/A:1009701824047

  36. Schmieta SH, Alizadeh F: Extension of primal-dual interior-point algorithms to symmetric cones. Math. Program. 2003,96(3):409–438. 10.1007/s10107-003-0380-z

  37. Nesterov YE, Todd MJ: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 1997,22(1):1–42. 10.1287/moor.22.1.1

  38. Nesterov YE, Todd MJ: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 1998,8(2):324–364. 10.1137/S1052623495290209

  39. Faybusovich L: A Jordan-algebraic approach to potential-reduction algorithms. Math. Z. 2002,239(1):117–129. 10.1007/s002090100286

  40. Faraut J, Korányi A: Analysis on Symmetric Cones. Oxford University Press, New York; 1994.

  41. Baes M: Convexity and differentiability properties of spectral functions and spectral mappings on Euclidean Jordan algebras. Linear Algebra Appl. 2007,422(2–3):664–700. 10.1016/j.laa.2006.11.025

  42. Korányi A: Monotone functions on formally real Jordan algebras. Math. Ann. 1984,269(1):73–76. 10.1007/BF01455996

Acknowledgements

This work was supported by Shanghai Natural Science Fund Project (14ZR1418900), National Natural Science Foundation of China (No. 11001169), China Postdoctoral Science Foundation funded project (Nos. 2012T50427, 20100480604) and Natural Science Foundation of Shanghai University of Engineering Science (No. 2014YYYF01).

Author information

Corresponding author

Correspondence to Guoqiang Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors carried out the proof. All authors conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Cai, X., Wu, L., Yue, Y. et al. Kernel-function-based primal-dual interior-point methods for convex quadratic optimization over symmetric cone. J Inequal Appl 2014, 308 (2014). https://doi.org/10.1186/1029-242X-2014-308

Keywords

  • interior-point methods
  • convex quadratic optimization
  • kernel function
  • Euclidean Jordan algebras
  • large- and small-update methods
  • polynomial complexity