
# New complexity analysis of interior-point methods for the Cartesian {P}_{\ast}(\kappa )-SCLCP

*Journal of Inequalities and Applications*
**volume 2013**, Article number: 285 (2013)

## Abstract

In this paper, we give a unified analysis for both large- and small-update interior-point methods for the Cartesian {P}_{\ast}(\kappa )-linear complementarity problem over symmetric cones based on a finite barrier. The proposed finite barrier is used both for determining the search directions and for measuring the distance between the given iterate and the *μ*-center for the algorithm. The symmetry of the resulting search directions is forced by using the Nesterov-Todd scaling scheme. By means of Euclidean Jordan algebras, together with the feature of the finite kernel function, we derive the iteration bounds that match the currently best known iteration bounds for large- and small-update methods. Furthermore, our algorithm and its polynomial iteration complexity analysis provide a unified treatment for a class of primal-dual interior-point methods and their complexity analysis.

**MSC:** 90C33, 90C51.

## 1 Introduction

Let (\mathcal{V},\circ ) be an *n*-dimensional Euclidean Jordan algebra with rank *r* equipped with the standard inner product \langle x,s\rangle =tr(x\circ s). Let \mathcal{K} be the corresponding symmetric cone. For a linear transformation \mathcal{A}:\mathcal{V}\to \mathcal{V} and a q\in \mathcal{V}, the linear complementarity problem over symmetric cones, denoted by SCLCP, is to find x,s\in \mathcal{V} such that

Note that \langle x,s\rangle =0\iff x\circ s=0 (Lemma 2.2 in [1]).
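For readers less familiar with the Jordan-algebraic setting, this equivalence can be checked numerically in the semidefinite special case, where the Jordan product is x\circ s=(xs+sx)/2 and the inner product is tr(xs). The sketch below constructs an illustrative complementary pair of PSD matrices; all concrete matrices are made up for the example.

```python
import numpy as np

# Illustration of Lemma 2.2 in [1] in the semidefinite case (V = S^n with the
# Jordan product x o s = (xs + sx)/2 and inner product <x, s> = tr(xs)):
# for x, s in the cone of PSD matrices, <x, s> = 0 iff x o s = 0.

def jordan(x, s):
    return (x @ s + s @ x) / 2

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
# complementary pair: eigenvalues supported on disjoint index sets
x = q @ np.diag([2.0, 1.0, 0.0, 0.0]) @ q.T
s = q @ np.diag([0.0, 0.0, 3.0, 0.5]) @ q.T

assert abs(np.trace(x @ s)) < 1e-10
assert np.allclose(jordan(x, s), 0)

# a generic PSD pair has positive inner product and a nonzero Jordan product
y = q @ np.eye(4) @ q.T
assert np.trace(y @ s) > 0 and not np.allclose(jordan(y, s), 0)
```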

The SCLCP is a wide class of problems that contains the linear complementarity problem (LCP), the second-order cone linear complementarity problem (SOCLCP), and the semidefinite linear complementarity problem (SDLCP) as special cases. For an overview of these and related results, we refer to the survey paper [2] and the references therein.

There are many solution approaches for the SCLCP. Among them, interior-point methods (IPMs) have received the most attention. Faybusovich [3] made the first attempt to generalize IPMs to symmetric optimization (SO) and the SCLCP using the ‘machinery’ of Euclidean Jordan algebras. Potra [4] proposed an infeasible corrector-predictor IPM for the monotone SCLCP. Yoshise [5] proposed the homogeneous model for monotone nonlinear complementarity problems (NCP) over symmetric cones (SCNCP) and analyzed an IPM to solve it.

Let \mathcal{V} be a Cartesian product of a finite number of simple Euclidean Jordan algebras ({\mathcal{V}}_{j},\circ ) with dimensions {n}_{j} and ranks {r}_{j} for j=1,\dots ,N, that is, \mathcal{V}={\mathcal{V}}_{1}\times \cdots \times {\mathcal{V}}_{N} with its cone of squares \mathcal{K}={\mathcal{K}}_{1}\times \cdots \times {\mathcal{K}}_{N}, where {\mathcal{K}}_{j} are the corresponding cones of squares of {\mathcal{V}}_{j} for j=1,\dots ,N. The dimension of \mathcal{V} is n={\sum}_{j=1}^{N}{n}_{j} and the rank is r={\sum}_{j=1}^{N}{r}_{j}. Recall that a Euclidean Jordan algebra is said to be simple if it cannot be represented as the orthogonal direct sum of two Euclidean Jordan algebras.

We call SCLCP the Cartesian {P}_{\ast}(\kappa )-SCLCP if the linear transformation \mathcal{A} has the Cartesian {P}_{\ast}(\kappa )-property for some nonnegative constant *κ*, *i.e.*,

where {\mathcal{I}}_{+}(x)=\{j:\langle {x}^{(j)},{[\mathcal{A}(x)]}^{(j)}\rangle >0\} and {\mathcal{I}}_{-}(x)=\{j:\langle {x}^{(j)},{[\mathcal{A}(x)]}^{(j)}\rangle <0\} are two index sets. It is closely related to the Cartesian {P}_{0}- and *P*-properties which were first introduced by Chen and Qi [6] over the space of symmetric matrices, and later extended by Pan and Chen [7] and Luo and Xiu [8] to the space of second-order cones and the general Euclidean Jordan algebras, respectively.
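In the simplest Cartesian setting \mathcal{V}={\mathbf{R}}^{{n}_{1}}\times \cdots \times {\mathbf{R}}^{{n}_{N}}, the property can be tested numerically. The sketch below assumes the standard form of the defining inequality, (1+4\kappa ){\sum}_{j\in {\mathcal{I}}_{+}(x)}\langle {x}^{(j)},{[\mathcal{A}(x)]}^{(j)}\rangle +{\sum}_{j\in {\mathcal{I}}_{-}(x)}\langle {x}^{(j)},{[\mathcal{A}(x)]}^{(j)}\rangle \ge 0 for all *x*; the matrix and block partition are made up for the example.

```python
import numpy as np

def cartesian_p_star_holds(M, blocks, kappa, trials=500, seed=1):
    """Sample-based check of the Cartesian P_*(kappa) inequality for V = R^n
    partitioned into blocks (a necessary-condition test, not a proof)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(M.shape[0])
        mx = M @ x
        terms = [x[b] @ mx[b] for b in blocks]
        pos = sum(t for t in terms if t > 0)
        neg = sum(t for t in terms if t < 0)
        if (1 + 4 * kappa) * pos + neg < -1e-12:
            return False
    return True

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A.T @ A                                   # positive semidefinite => P_*(0)
blocks = [slice(0, 2), slice(2, 4), slice(4, 6)]
assert cartesian_p_star_holds(M, blocks, kappa=0.0)
```

Since a positive semidefinite M satisfies \langle x,Mx\rangle \ge 0 for all *x*, it has the Cartesian {P}_{\ast}(0)-property for any block partition, which the check confirms on random samples.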

The Cartesian {P}_{\ast}(\kappa )-SCLCP is indeed a generalization of the {P}_{\ast}(\kappa )-LCP, which was first introduced by Kojima *et al.* [9]. They established the existence of the central path and designed and analyzed IPMs for solving the {P}_{\ast}(\kappa )-LCP. The theoretical importance of this class of LCPs lies in the fact that it is the largest class for which polynomial convergence of IPMs can be proved without additional conditions (such as boundedness of the level sets).

Luo and Xiu [8] were the first to establish a theoretical framework of path-following interior-point algorithms for the Cartesian {P}_{\ast}(\kappa )-SCLCP and to prove the global convergence and the iteration complexities of the proposed algorithms. Wang and Bai [10] analyzed a class of IPMs for the Cartesian {P}_{\ast}(\kappa )-SCLCP based on a parametric kernel function different from the logarithmic kernel function. Lesaja *et al.* [11] gave a unified analysis of kernel-based IPMs for the Cartesian {P}_{\ast}(\kappa )-SCLCP and derived the currently best known iteration bounds for large- and small-update methods for some special eligible kernel functions. Wang and Lesaja [12] generalized Roos’s full-Newton step feasible IPM for LO [13], and its extension to SO by Gu *et al.* [14], to the Cartesian {P}_{\ast}(\kappa )-SCLCP. Liu *et al.* [15] proposed smoothing Newton methods for the Cartesian {P}_{0}- and *P*-SCLCPs. Huang and Lu [16] presented a globally convergent smoothing method with a linear rate of convergence for the Cartesian {P}_{\ast}(\kappa )-SCLCP.

Bai *et al.* [17] introduced a finite kernel function as follows:

which is not a kernel function in the usual sense (see, *e.g.*, [18, 19]). It has a finite value at the boundary of the feasible region, *i.e.*,

However, the iteration bound of a large-update method based on this kernel function is shown to be O(\sqrt{n}\log n\log \frac{n}{\epsilon}). Recently, El Ghami *et al.* [20] studied the generalization of the finite kernel function \psi (t) as follows:

This parametric kernel function also has a finite value at the boundary of the feasible region, and its growth term is between linear and quadratic. They proposed a class of primal-dual interior-point algorithms for LO, and its extension to SDO [21], based on the parametric kernel function {\psi}_{p,\sigma}(t). Meanwhile, the results for LO in [17, 20] were extended to the {P}_{\ast}(\kappa )-LCP by Wang and Bai in [22], again matching the best known iteration bounds for LO up to the additional factor 1+2\kappa. An interesting question here is whether we can directly extend the interior-point algorithms for LO in [17] to the Cartesian {P}_{\ast}(\kappa )-SCLCP. As we will see later, the LO case cannot be trivially generalized to the Cartesian {P}_{\ast}(\kappa )-SCLCP context. The analysis of the algorithm proposed in this paper is more complicated than in the LO case, mainly because the search directions are no longer orthogonal.

In this paper, we consider a generalization of the kernel-based IPMs discussed in [17] to the Cartesian {P}_{\ast}(\kappa )-SCLCP. The paper also extends the results of [22], where we considered the same type of IPMs for the {P}_{\ast}(\kappa )-LCP, but only over the nonnegative orthant. Our goal is to provide a unified analysis for both large- and small-update IPMs for the Cartesian {P}_{\ast}(\kappa )-SCLCP based on the finite barrier. Although the proposed algorithm is an exact extension of the algorithms for LO and the {P}_{\ast}(\kappa )-LCP, the Cartesian {P}_{\ast}(\kappa )-property makes the analysis of the method far more complicated. Furthermore, we lose the orthogonality of the scaled search directions in the Cartesian {P}_{\ast}(\kappa )-SCLCP case, which creates additional difficulties in the analysis of the algorithm. However, we manage to prove the same good characteristics as in the LO case. The obtained complexity results match the best iteration bounds known for large- and small-update methods, namely O((1+2\kappa )\sqrt{r}\log r\log \frac{r}{\epsilon}) and O((1+2\kappa )\sqrt{r}\log \frac{r}{\epsilon}), respectively. The order of the iteration bounds almost coincides with the bounds derived for LO in [17], except that in the Cartesian {P}_{\ast}(\kappa )-SCLCP case the bounds are multiplied by the factor (1+2\kappa ).

The paper is organized as follows. In Section 2, we briefly describe some concepts, properties, and results from Euclidean Jordan algebras. In Section 3, we provide and develop some useful properties of the finite kernel function \psi (t) and the corresponding barrier function \mathrm{\Psi}(v). In Section 4, we mainly study primal-dual IPMs for the Cartesian {P}_{\ast}(\kappa )-SCLCP based on the finite kernel function. The analysis and complexity bounds of the algorithm are presented in Sections 5 and 6, respectively. Finally, some conclusions and remarks follow in Section 7.

Notations used throughout the paper are as follows. {\mathbf{R}}^{n}, {\mathbf{R}}_{+}^{n}, and {\mathbf{R}}_{++}^{n} denote the set of all real vectors with *n* components, the set of nonnegative vectors, and the set of positive vectors, respectively. The largest and smallest eigenvalues of *x* are denoted by {\lambda}_{\mathrm{max}}(x) and {\lambda}_{\mathrm{min}}(x), respectively. The Löwner partial ordering ‘{\succeq}_{\mathcal{K}}’ of \mathcal{V} induced by a symmetric cone \mathcal{K} is defined by x{\succeq}_{\mathcal{K}}s if x-s\in \mathcal{K}. The interior of \mathcal{K} is denoted by int\mathcal{K}, and we write x{\succ}_{\mathcal{K}}s if x-s\in int\mathcal{K}. Finally, if g(x)\ge 0 is a real-valued function of a real nonnegative variable, the notation g(x)=O(x) means that g(x)\le \overline{c}x for some positive constant \overline{c}, and g(x)=\mathrm{\Theta}(x) means that {c}_{1}x\le g(x)\le {c}_{2}x for two positive constants {c}_{1} and {c}_{2}.

## 2 Preliminaries

In what follows, we assume that the reader is familiar with the basic concepts of Euclidean Jordan algebras and symmetric cones. The detailed information can be found in the monograph of Faraut and Korányi [23] and in [1, 14, 24–29] as it relates to optimization.

The bilinear form on \mathcal{V} is defined as

where x={({x}^{(1)},\dots ,{x}^{(N)})}^{T} and s={({s}^{(1)},\dots ,{s}^{(N)})}^{T} in \mathcal{V} with {x}^{(j)},{s}^{(j)}\in {\mathcal{V}}_{j}, j=1,\dots ,N. If {e}^{(j)}\in {\mathcal{V}}_{j} is the identity element in the Euclidean Jordan algebra {\mathcal{V}}_{j}, then the vector

is the identity element in \mathcal{V}.

For each {x}^{(j)}\in {\mathcal{V}}_{j} with j=1,\dots ,N, the Lyapunov transformation and the quadratic representation of {\mathcal{V}}_{j} are given by

where L{({x}^{(j)})}^{2}=L({x}^{(j)})L({x}^{(j)}). They can be adjusted to the Cartesian product structure \mathcal{V} as follows:

The spectral decomposition of {x}^{(j)}\in {\mathcal{V}}_{j} with respect to the Jordan frame \{{c}_{1}^{(j)},\dots ,{c}_{{r}_{j}}^{(j)}\} is given by

where {\lambda}_{1}({x}^{(j)}),\dots ,{\lambda}_{{r}_{j}}({x}^{(j)}) are the corresponding eigenvalues. The spectral decomposition of x\in \mathcal{V} can be defined straightforwardly by using the spectral decomposition of components {x}^{(j)}\in {\mathcal{V}}_{j} as follows:

It enables us to extend the definition of any real-valued, continuous univariate function to elements of a Euclidean Jordan algebra, using the eigenvalues. In particular this holds for the finite kernel function.

Let x\in \mathcal{V} with the spectral decomposition as defined in (11). The vector-valued function \psi (x) is defined by

where

Furthermore, if \psi (t) is differentiable, the derivative {\psi}^{\prime}(t) exists, and we also have a vector-valued function {\psi}^{\prime}(x), namely

where

It should be noted that {\psi}^{\prime}(x) does not denote the derivative of the vector-valued function \psi (x) defined by (12); it is just the vector-valued function induced by the derivative {\psi}^{\prime}(t) of the function \psi (t).

The Peirce decomposition of {x}^{(j)}\in {\mathcal{V}}_{j} with respect to the Jordan frame \{{c}_{1}^{(j)},\dots ,{c}_{{r}_{j}}^{(j)}\} is given by

with {x}_{i}^{(j)}\in \mathbf{R}, i=1,\dots ,{r}_{j} and {x}_{i{m}_{j}}^{(j)}\in {\mathcal{V}}_{i{m}_{j}}^{(j)}, 1\le i<{m}_{j}\le {r}_{j}. The {\mathcal{V}}_{i{m}_{j}}^{(j)} for 1\le i<{m}_{j}\le {r}_{j} are the Peirce subspaces of {\mathcal{V}}_{j} induced by the Jordan frame {c}_{1}^{(j)},\dots ,{c}_{{r}_{j}}^{(j)}. The Peirce decomposition of x\in \mathcal{V} can be defined straightforwardly by using the Peirce decomposition of components {x}^{(j)}\in {\mathcal{V}}_{j} as follows:

The canonical inner product is defined as

We recall the following definitions:

Then, in \mathcal{V} we have

Furthermore, we define

and

It follows from (21), (22), and (20) that

Furthermore, we have the following important result.

**Lemma 2.1** (Lemma 14 in [28])

*Let* x,s\in \mathcal{V}. *Then*

Before ending this section, we need to consider the separable spectral functions induced by univariate functions. Let f:D\to \mathbf{R} be a univariate function on the open set D\subseteq \mathbf{R} that is differentiable, or even continuously differentiable if necessary. Let {x}^{(j)}={\sum}_{i=1}^{{r}_{j}}{\lambda}_{i}({x}^{(j)}){c}_{i}^{(j)} be the spectral decomposition of {x}^{(j)}\in {\mathcal{V}}_{j} with respect to the Jordan frame {c}_{1}^{(j)},\dots ,{c}_{{r}_{j}}^{(j)} for each *j*, j=1,\dots ,N. Then we define the real-valued separable spectral function F:{\mathcal{V}}_{j}\to \mathbf{R} and the vector-valued separable spectral function G:{\mathcal{V}}_{j}\to {\mathcal{V}}_{j} by

respectively. The first derivative {D}_{x}F({x}^{(j)}) of the function F({x}^{(j)}) and the first derivative {D}_{x}G({x}^{(j)}) of the function G({x}^{(j)}) are given by

and

respectively, where {\lambda}_{i}^{(j)}={\lambda}_{i}({x}^{(j)}), {\lambda}_{{m}_{j}}^{(j)}={\lambda}_{{m}_{j}}({x}^{(j)}), and {\mathcal{V}}_{i{m}_{j}}^{(j)}, 1\le i\le {m}_{j}\le {r}_{j}, are the orthogonal projection operators that appear in the Peirce decomposition of {\mathcal{V}}_{j} with respect to the Jordan frame {c}_{1}^{(j)},\dots ,{c}_{{r}_{j}}^{(j)}.

The above results, as well as a more general treatment of spectral functions, their derivatives and various properties can be found in [24, 27].

Now, the separable spectral functions can be adjusted to the Cartesian product structure \mathcal{V} as follows:

It follows directly from (25) and (26) that

## 3 Properties of the finite kernel (barrier) function

In this section, we provide and develop some useful properties of the finite kernel function and the corresponding barrier function that are needed in the analysis of the algorithm. For ease of reference, we give the first three derivatives of \psi (t) with respect to *t* as follows:

We can conclude that

It follows from (30) that \psi (t) is strictly convex and {\psi}^{\prime \prime}(t) is monotonically decreasing in t\in (0,+\mathrm{\infty}).
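As a concrete sanity check, these properties can be verified numerically. The closed form used below, \psi (t)=\frac{{t}^{2}-1}{2}+\frac{{e}^{\sigma (1-t)}-1}{\sigma}, is our reading of the finite kernel function of [17] (it is consistent with {\psi}^{\prime \prime}(1)=1+\sigma used in Lemma 3.4 and with {\psi}^{\prime \prime} being monotonically decreasing); treat it as an assumption, since the displayed formula is not reproduced above.

```python
import numpy as np

# Sketch of the finite kernel function of Bai et al. [17]; the closed form
#   psi(t) = (t^2 - 1)/2 + (exp(sigma*(1 - t)) - 1)/sigma
# is assumed here.

sigma = 5.0
psi   = lambda t: (t**2 - 1) / 2 + (np.exp(sigma * (1 - t)) - 1) / sigma
dpsi  = lambda t: t - np.exp(sigma * (1 - t))
d2psi = lambda t: 1 + sigma * np.exp(sigma * (1 - t))

ts = np.linspace(0.05, 5.0, 400)
assert abs(psi(1.0)) < 1e-12 and abs(dpsi(1.0)) < 1e-12   # global minimum at t = 1
assert np.all(d2psi(ts) > 1)                              # strictly convex
assert np.all(np.diff(d2psi(ts)) < 0)                     # psi'' decreasing on (0, inf)
assert np.isfinite(psi(1e-12))                            # finite value at the boundary
```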

The property described below in Lemma 3.1 is exponential convexity, which has been proven to be very useful in the analysis of kernel-based primal-dual IPMs (see, *e.g.*, [18, 19]).

**Lemma 3.1** (Lemma 2.4 in [17])

*If* {t}_{1}\ge \frac{1}{\sigma} *and* {t}_{2}\ge \frac{1}{\sigma}, *then*

Note that \psi (t) is exponentially convex whenever t\ge \frac{1}{\sigma}. The following lemma shows that when *v* belongs to the level set \{v:\mathrm{\Psi}(v)\le L\} for some given L\ge 8, the exponential convexity is guaranteed provided that the value of *σ* is chosen large enough.

**Lemma 3.2** (Lemma 2.5 in [17])

*Let* L\ge 8 *and* \mathrm{\Psi}(v)\le L. *If* \sigma \ge 1+2log(1+L), *then* {\lambda}_{\mathrm{min}}(v)\ge \frac{3}{2\sigma}.
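A numerical spot check of Lemma 3.2, again assuming the closed form of \psi from [17]: if \mathrm{\Psi}(v)\le L, then each eigenvalue *t* of *v* satisfies \psi (t)\le L, so the smallest admissible eigenvalue is the root of \psi (t)=L on (0,1), which the lemma claims is at least \frac{3}{2\sigma}.

```python
import math

# Spot check of Lemma 3.2 under the assumed closed form
# psi(t) = (t^2 - 1)/2 + (exp(sigma*(1 - t)) - 1)/sigma.

def min_eig_lower_bound(L, sigma):
    psi = lambda t: (t * t - 1) / 2 + (math.exp(sigma * (1 - t)) - 1) / sigma
    lo, hi = 1e-9, 1.0          # psi(lo) > L and psi(hi) = 0 < L
    for _ in range(200):        # bisection for the root of psi(t) = L
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if psi(mid) > L else (lo, mid)
    return lo

for L in (8, 50, 1000):
    sigma = 1 + 2 * math.log(1 + L)
    assert min_eig_lower_bound(L, sigma) >= 3 / (2 * sigma)
```

For L=8 the bound is quite tight: the root is roughly 0.29 while \frac{3}{2\sigma}\approx 0.28.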

Corresponding to the finite kernel function \psi (t) defined by (3), we define the barrier function on int\mathcal{K} as follows:

It follows immediately from (12) and (20) that

According to the properties of the finite kernel function \psi (t), we can conclude that \mathrm{\Psi}(v) is nonnegative and strictly convex with respect to v{\succ}_{\mathcal{K}}0 and vanishes at its global minimal point v=e, *i.e.*,

Furthermore, we have, by (28),

This means that the derivative of the barrier function \mathrm{\Psi}(v) in essence coincides with the vector-valued function {\psi}^{\prime}(v) defined by (14) and (15).

As a consequence of Lemma 3.1, we have the following theorem, which is indeed a slight modification of Theorem 4.3.2 in [29]. Thus, we omit its proof.

**Theorem 3.3** *Let* x,s\in int\mathcal{K}. *If* {\lambda}_{\mathrm{min}}(x)\ge \frac{1}{\sigma} *and* {\lambda}_{\mathrm{min}}(s)\ge \frac{1}{\sigma}, *then*

**Lemma 3.4** *If* t\ge 1, *then*

*Proof* From Taylor’s theorem and the fact that {\psi}^{\prime \prime}(1)=1+\sigma, the inequality is straightforward. □

**Lemma 3.5** *If* t\ge 1, *then*

*Proof* Defining f(t):=t{\psi}^{\prime}(t)-\psi (t), we have f(1)=0 and

This implies the desired result. □

The following lemma can be directly obtained from Lemma 2.5 in [22]; it provides lower and upper bounds for the inverse function of the finite kernel function \psi (t) for t\ge 1.

**Lemma 3.6** *Let* \varrho :[0,\mathrm{\infty})\to [1,\mathrm{\infty}) *be the inverse function of the finite kernel function* \psi (t) *for* t\ge 1. *If* \sigma \ge 1, *then*

*If* \sigma \ge 2, *then*

For the analysis of the algorithm, we define the norm-based proximity measure \delta (v) as follows:

It follows from (14) and (20) that

We can conclude that \delta (v)\ge 0 and \delta (v)=0 if and only if \mathrm{\Psi}(v)=0.

Clearly, \delta (v) and \mathrm{\Psi}(v) depend only on the eigenvalues {\lambda}_{i}({v}^{(j)}) of the components {v}^{(j)} for each *j*, j=1,\dots ,N. The following theorem gives a lower bound on \delta (v) in terms of \mathrm{\Psi}(v), which is precisely the same as its LO counterpart (*cf.* Theorem 4.8 in [18]).

**Theorem 3.7** *If* v\in int\mathcal{K}, *then*

**Corollary 3.8** *If* v\in int\mathcal{K} *and* \mathrm{\Psi}(v)\ge 1, *then*

*Proof* From (34) and the fact that \mathrm{\Psi}(v)\ge 1 and \sigma \ge 1, we have

Thus, we have, by Theorem 3.7 and Lemma 3.5,

This completes the proof of the corollary. □

In what follows, we consider the derivatives of the function \mathrm{\Psi}(x(t)) with respect to *t*, where x(t)={x}_{0}+tu\in int\mathcal{K} with t\in \mathbf{R} and u\in \mathcal{V}. For more details, we refer to [29].

It follows from (11) and (17) that the spectral decomposition of x(t) with respect to the Jordan frame \{{c}_{1}^{(1)},\dots ,{c}_{{r}_{1}}^{(1)},\dots ,{c}_{1}^{(N)},\dots ,{c}_{{r}_{N}}^{(N)}\} can be defined by

and the Peirce decomposition of *u* can be defined by

From (28), after some elementary reductions, we can derive the first two derivatives of the general function \mathrm{\Psi}(x(t)) with respect to *t* as follows:

and

where {\lambda}_{i}^{(j)}={\lambda}_{i}(x{(t)}^{(j)}) and {\lambda}_{{m}_{j}}^{(j)}={\lambda}_{{m}_{j}}(x{(t)}^{(j)}).

Note that {\psi}^{\prime \prime}(t) is monotonically decreasing in t\in (0,+\mathrm{\infty}). Under the assumption that i<{m}_{j} implies {\lambda}_{i}(x(t))\ge {\lambda}_{{m}_{j}}(x(t)), we can conclude that

which bounds the second-order derivative of \mathrm{\Psi}(x(t)) with respect to *t*.

## 4 Interior-point algorithm for the Cartesian {P}_{\ast}(\kappa )-SCLCP

In this section, we first introduce the central path for the Cartesian {P}_{\ast}(\kappa )-SCLCP. Next, we mainly derive the new search directions induced by the finite kernel function \psi (t). Finally, we present the generic polynomial interior-point algorithm for the Cartesian {P}_{\ast}(\kappa )-SCLCP.

### 4.1 The central path for the Cartesian {P}_{\ast}(\kappa )-SCLCP

Throughout the paper, we assume that the Cartesian {P}_{\ast}(\kappa )-SCLCP satisfies the interior-point condition (IPC), *i.e.*, there exists ({x}^{0}{\succ}_{\mathcal{K}}0,{s}^{0}{\succ}_{\mathcal{K}}0) such that {s}^{0}=\mathcal{A}({x}^{0})+q. For this and other properties of the Cartesian {P}_{\ast}(\kappa )-SCLCP, we refer to [8]. Assuming that the IPC holds, by relaxing the complementarity condition x\diamond s=0 to x\diamond s=\mu e, we obtain the following system:

where \mu >0 is a parameter. The parameterized system (43) has a unique solution for each \mu >0. This solution is denoted as (x(\mu ),s(\mu )) and we call (x(\mu ),s(\mu )) the *μ*-center of the Cartesian {P}_{\ast}(\kappa )-SCLCP. The set of *μ*-centers (with *μ* running through all positive real numbers) gives a homotopy path, which is called *the central path* of the Cartesian {P}_{\ast}(\kappa )-SCLCP. If \mu \to 0, then the limit of the central path exists and since the limit points satisfy the complementarity condition x\diamond s=0, the limit yields a solution for the Cartesian {P}_{\ast}(\kappa )-SCLCP (see, *e.g.*, [8]).

### 4.2 The new search directions for the Cartesian {P}_{\ast}(\kappa )-SCLCP

To obtain the search directions for the Cartesian {P}_{\ast}(\kappa )-SCLCP, the usual approach is to use Newton’s method and to linearize the system (43). In what follows, we briefly outline the details.

For any strictly feasible x{\succ}_{\mathcal{K}}0 and s{\succ}_{\mathcal{K}}0, we want to find displacements Δ*x* and Δ*s* such that

Neglecting the term \mathrm{\Delta}x\diamond \mathrm{\Delta}s on the left-hand side expression of the second equation, we obtain the following Newton system for the search directions Δ*x* and Δ*s*:

Due to the fact that *x* and *s* do not operator commute in general, *i.e.*, L(x)L(s)\ne L(s)L(x), this system does not always have a unique solution. It is well known that this difficulty can be resolved by applying a scaling scheme, which goes as follows.

**Lemma 4.1** (Lemma 28 in [28])

*Let* u\in int\mathcal{K}. *Then*

Now we replace the second equation of the system (44) by

Applying Newton’s method again, and neglecting the term P(u)\mathrm{\Delta}x\diamond P({u}^{-1})\mathrm{\Delta}s, we get

By choosing *u* appropriately, this system can be used to define the commutative class of search directions (see, *e.g.*, [28]). In the literature the following three choices are well known: u={s}^{1/2}, u={x}^{1/2}, and u={w}^{-1/2}, where *w* is the NT-scaling point of *x* and *s*. The first two choices lead to the so-called *xs*-direction and *sx*-direction, respectively. In this paper, we consider the third choice, which is called the NT-scaling scheme; the resulting direction is called the NT search direction. This scaling scheme was first proposed by Nesterov and Todd for self-scaled cones [30, 31] and then adapted by Faybusovich [1, 26] for symmetric cones.
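NT scaling is easiest to see in the nonnegative orthant, where the quadratic representation is P(w)=diag({w}^{2}). The sketch below assumes the standard defining relation P(w)s=x for the scaling point (the displayed equation of Lemma 4.2 is not reproduced above), which in the orthant gives w=\sqrt{x/s}.

```python
import numpy as np

# NT scaling in the simplest symmetric cone, the nonnegative orthant
# (V = R^n with the componentwise product, so P(w) = diag(w^2)).
# Assumed defining relation: P(w)s = x, hence w = sqrt(x/s).

rng = np.random.default_rng(0)
x, s = rng.uniform(0.5, 2.0, 8), rng.uniform(0.5, 2.0, 8)
mu = x @ s / len(x)

w = np.sqrt(x / s)                       # NT scaling point
assert np.allclose(w**2 * s, x)          # P(w)s = x

# both scalings of x and s collapse to the same variance vector v
v1 = (1 / w) * x / np.sqrt(mu)           # P(w)^{-1/2} x / sqrt(mu)
v2 = w * s / np.sqrt(mu)                 # P(w)^{1/2} s / sqrt(mu)
assert np.allclose(v1, v2) and np.allclose(v1, np.sqrt(x * s / mu))
```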

**Lemma 4.2** (Lemma 3.2 in [26])

*Let* x,s\in int\mathcal{K}. *Then there exists a unique scaling point* w\in int\mathcal{K} *such that*

*Moreover*,

As a consequence of the above lemma, there exists \tilde{v}\in int\mathcal{K} such that

Note that P{(w)}^{\frac{1}{2}} and its inverse P{(w)}^{-\frac{1}{2}} are automorphisms of \mathcal{K} (see, *e.g.*, [14, 29]). This leads to the definition of the following *variance vector*:

Furthermore, we define

The transformation \overline{\mathcal{A}} also has the Cartesian {P}_{\ast}(\kappa )-property (*cf.* Proposition 3.4 in [8]).

Using (49) and (50), after some elementary reductions, we obtain the scaled Newton system as follows:

Since the linear transformation \overline{\mathcal{A}} has the Cartesian {P}_{\ast}(\kappa )-property, the system (51) has a unique solution [8].

So far we have described the scheme that defines the classical NT-direction for the Cartesian {P}_{\ast}(\kappa )-SCLCP. The approach in this paper differs only in one detail. Given the finite kernel function \psi (t) defined by (3) and the associated vector-valued function {\psi}^{\prime}(v) defined by (14) and (15), we replace the right-hand side of the second equation in (51) by -{\psi}^{\prime}(v), *i.e.*, minus the derivative of the barrier function \mathrm{\Psi}(v). Thus we consider the following system:

Since the system (52) has the same matrix of coefficients as the system (51), also the system (52) has a unique solution.^{a}

The new search directions {d}_{x} and {d}_{s} are computed by solving (52), and then Δ*x* and Δ*s* are obtained from (50). If (x,s)\ne (x(\mu ),s(\mu )), then (\mathrm{\Delta}x,\mathrm{\Delta}s) is nonzero. By taking a default step size *α* along the search directions, we get the new iterate ({x}_{+},{s}_{+}) according to

Furthermore, we can easily verify that

Hence, the value of \mathrm{\Psi}(v) can be considered as a measure for the distance between the given iterate (x,s) and the *μ*-center (x(\mu ),s(\mu )).

### 4.3 The generic interior-point algorithm for the Cartesian {P}_{\ast}(\kappa )-SCLCP

Define the *τ*-neighborhood of the central path as follows:

It is clear from the above description that the closeness of (x,s) to (x(\mu ),s(\mu )) is measured by the value of \mathrm{\Psi}(v), with \tau >0 as a threshold value. If \mathrm{\Psi}(v)\le \tau, then we start a new *outer iteration* by performing a *μ*-update, *i.e.*, {\mu}_{+}:=(1-\theta )\mu; otherwise, we enter an *inner iteration* by computing the search directions using (52) and (50) at the current iterates with respect to the current value of *μ*, and apply (53) to get new iterates. If necessary, we repeat the procedure until we find iterates that are in the *τ*-neighborhood of the central path. Then *μ* is again reduced by the factor 1-\theta with 0<\theta <1, and we apply inner iteration(s) targeting the new *μ*-center, and so on. This process is repeated until *μ* is small enough, say until r\mu <\epsilon. At this stage, we have found an *ε*-approximate solution of the Cartesian {P}_{\ast}(\kappa )-SCLCP.
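The outer/inner iteration scheme can be sketched as runnable code in the simplest special case: the monotone LCP over the nonnegative orthant (N=n blocks of rank 1, \kappa =0). The sketch assumes the closed form \psi (t)=\frac{{t}^{2}-1}{2}+\frac{{e}^{\sigma (1-t)}-1}{\sigma} for the finite kernel of [17], and uses a simple backtracking step size on \mathrm{\Psi}(v) instead of the default step size of Section 5; it is an illustration, not the algorithm of the paper.

```python
import numpy as np

def psi(t, sigma):
    # assumed closed form of the finite kernel function of [17]
    return (t**2 - 1) / 2 + (np.exp(sigma * (1 - t)) - 1) / sigma

def dpsi(t, sigma):
    return t - np.exp(sigma * (1 - t))

def kernel_ipm(M, q, x, s, eps=1e-8, theta=0.5, tau=3.0, sigma=5.0):
    """Generic kernel-based IPM, specialized to the monotone LCP over R^n_+."""
    n = len(q)
    assert np.allclose(M @ x + q, s)               # feasible starting point
    mu = x @ s / n
    while n * mu >= eps:
        mu *= 1 - theta                            # outer iteration: mu-update
        while True:
            v = np.sqrt(x * s / mu)                # variance vector (orthant case)
            if psi(v, sigma).sum() <= tau:         # back in the tau-neighborhood
                break
            # Scaled Newton system in the orthant: d_x + d_s = -psi'(v) with
            # w = sqrt(x/s), d_x = Dx/(w*sqrt(mu)), d_s = w*Ds/sqrt(mu), Ds = M Dx;
            # eliminating d_s gives (diag(1/w) + diag(w) M) Dx = -sqrt(mu) psi'(v).
            w = np.sqrt(x / s)
            Dx = np.linalg.solve(np.diag(1 / w) + w[:, None] * M,
                                 -np.sqrt(mu) * dpsi(v, sigma))
            Ds = M @ Dx
            # damped step: stay strictly feasible, then backtrack until Psi decreases
            ratios = [x[Dx < 0] / -Dx[Dx < 0], s[Ds < 0] / -Ds[Ds < 0]]
            alpha = 0.9 * min([1 / 0.9] + [r.min() for r in ratios if r.size])
            while psi(np.sqrt((x + alpha * Dx) * (s + alpha * Ds) / mu),
                      sigma).sum() >= psi(v, sigma).sum():
                alpha /= 2
            x, s = x + alpha * Dx, s + alpha * Ds
    return x, s

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A.T @ A + np.eye(6)            # positive definite, hence monotone (kappa = 0)
x0, s0 = np.ones(6), np.ones(6)
q = s0 - M @ x0                    # makes (x0, s0) strictly feasible
x, s = kernel_ipm(M, q, x0, s0)
assert x @ s < 1e-7 and x.min() > 0 and s.min() > 0
```

Note that feasibility s=Mx+q is preserved exactly because \mathrm{\Delta}s=M\mathrm{\Delta}x, and the backtracking loop terminates since the Newton direction is a descent direction for \mathrm{\Psi}(v) whenever \mathrm{\Psi}(v)>0.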

The generic interior-point algorithm for the Cartesian {P}_{\ast}(\kappa )-SCLCP is now presented as follows.

## 5 Analysis of the algorithm

In this section, we first discuss the growth behavior of the barrier function during an outer iteration. Next, we choose the default step size and obtain an upper bound for the decrease of the barrier function during an inner iteration. Finally, we show that the default step size yields sufficient decrease of the barrier function value during each inner iteration.

### 5.1 Growth behavior of the barrier function during an outer iteration

It should be mentioned that during the course of the algorithm the largest values of \mathrm{\Psi}(v) occur just after the update of *μ*. So, next we derive an estimate for the effect of a *μ*-update on the value of \mathrm{\Psi}(v).

It follows from (32) that

which means that \mathrm{\Psi}(\beta v) depends only on the eigenvalues {\lambda}_{i}({v}^{(j)}) of the components {v}^{(j)} for each *j*, j=1,\dots ,N. The growth behavior of \mathrm{\Psi}(\beta v) is precisely the same as in the LO case (*cf.* Theorem 3.2 in [18]).

**Theorem 5.1** *If* v\in {\mathcal{K}}_{+} *and* \beta \ge 1, *then*

**Corollary 5.2** *Let* 0<\theta <1 *and* {v}_{+}=\frac{v}{\sqrt{1-\theta}}. *If* \mathrm{\Psi}(v)\le \tau, *then*

*Proof* With \beta =\frac{1}{\sqrt{1-\theta}}>1 and \mathrm{\Psi}(v)\le \tau, the corollary follows immediately from Theorem 5.1. □

### 5.2 Choice of the default step size

From (53) and (50), after some elementary reductions, we have

Thus,

or equivalently,

where, according to Lemma 4.2,

To calculate the decrease of the barrier function \mathrm{\Psi}(v) during an inner iteration, it is standard to consider the decrease as a function of *α* defined by

Our aim is to find an upper bound for f(\alpha ) by using the exponential convexity of \psi (t) stated in Lemma 3.1. In order to do this, we assume for the moment that

However, working with f(\alpha ) may not be easy because, in general, f(\alpha ) is not convex. Thus, we seek a convex function {f}_{1}(\alpha ) that is an upper bound of f(\alpha ) and whose derivatives are easier to calculate than those of f(\alpha ). The key element in this process is replacing {v}_{+} with a similar element that allows the use of the exponential convexity of the barrier function. By Proposition 5.9.3 in [29], we have

and therefore

Theorem 3.3 implies that

Hence, we have

which means that {f}_{1}(\alpha ) gives an upper bound for the decrease of the barrier function \mathrm{\Psi}(v). Furthermore, we can conclude that f(0)={f}_{1}(0)=0.

From (40), we have

This gives, by (52) and (36),

Hence, we can conclude that {f}_{1}(\alpha ) is monotonically decreasing in a neighborhood of \alpha =0.

Furthermore, we have, by (41) and (42),

Contrary to the LO case, the vectors {d}_{x} and {d}_{s} are no longer necessarily orthogonal. However, the Cartesian {P}_{\ast}(\kappa )-property of the SCLCP still allows us to find a good lower bound for the inner product \langle {d}_{x},{d}_{s}\rangle.

In order to facilitate discussion, we denote

**Lemma 5.3**
*One has*

*Proof* Since the linear transformation \mathcal{A} has the Cartesian {P}_{\ast}(\kappa )-property, we have

where {\mathcal{I}}_{+}(\mathrm{\Delta}x)=\{j:\langle \mathrm{\Delta}{x}^{(j)},{[\mathcal{A}(\mathrm{\Delta}x)]}^{(j)}\rangle >0\} and {\mathcal{I}}_{-}(\mathrm{\Delta}x)=\{j:\langle \mathrm{\Delta}{x}^{(j)},{[\mathcal{A}(\mathrm{\Delta}x)]}^{(j)}\rangle <0\} are two index sets. It follows from (50) and \mathcal{A}(\mathrm{\Delta}x)=\mathrm{\Delta}s that \langle \mathrm{\Delta}x,\mathrm{\Delta}s\rangle =\mu \langle {d}_{x},{d}_{s}\rangle. This enables us to rewrite (59) as

Hence, it follows that

Using the arithmetic-geometric mean inequality \langle a,b\rangle \le \frac{1}{4}\langle a+b,a+b\rangle, we have

Substitution of this inequality into (61) yields

This completes the proof of the lemma. □

The key steps in the analysis of the algorithm are based on the effort to find upper bounds on \parallel {d}_{x}\parallel and \parallel {d}_{s}\parallel in terms of the proximity measure *δ*. The following lemma yields their upper bounds.

**Lemma 5.4**
*One has*

*Proof* From Lemma 5.3, we have

This implies the inequalities in the statement of the lemma. □

**Lemma 5.5**
*One has*

*Proof* From Lemma 2.1 and Lemma 5.4, we have

Let

be the Peirce decomposition of {d}_{x}^{(j)} with respect to the Jordan frame \{{c}_{1}^{(j)},\dots ,{c}_{{r}_{j}}^{(j)}\}, and let

be the Peirce decomposition of {d}_{s}^{(j)} with respect to the Jordan frame \{{b}_{1}^{(j)},\dots ,{b}_{{r}_{j}}^{(j)}\}. We have

and

Since {\psi}^{\prime \prime}(t) is monotonically decreasing in t\in (0,+\mathrm{\infty}), we have, by (57),

The last inequality holds due to (62). This completes the proof of the lemma. □

From this point on, the analysis of the algorithm closely follows the analogous analyses in [17, 22], with straightforward modifications that take into account the Cartesian {P}_{\ast}(\kappa )-property. Therefore, the intermediate results are omitted and only the main results are stated without proofs.

In particular, the step size *α* satisfies the following condition:

It follows from (63) and the definition of *ρ* that

Using the second equation of (64), we have

It follows from Corollary 3.8 and \mathrm{\Psi}(v)\ge 1 that

Hence, we have

In the sequel, we use the notation

and we will use \tilde{\alpha} as the default step size. It is obvious that \alpha \ge \tilde{\alpha}.

Now, to validate the above analysis, we need to show that \tilde{\alpha} satisfies (56). In fact, from Lemmas 2.1, 3.2, 5.4 and (65), we have

and

### 5.3 Decrease of the value of \mathrm{\Psi}(v) during an inner iteration

In what follows, we will show that the barrier function \mathrm{\Psi}(v) in each inner iteration with the default step size \tilde{\alpha}, as defined by (65), is decreasing. For this, we need the following technical result.

**Lemma 5.6** (Lemma 3.12 in [19])

*Let* h(t) *be a twice differentiable convex function with* h(0)=0, {h}^{\prime}(0)<0 *and let* h(t) *attain its* (*global*) *minimum at* {t}^{\ast}>0. *If* {h}^{\u2033}(t) *is increasing for* t\in [0,{t}^{\ast}], *then*

As a consequence of Lemma 5.6 and the fact that f(\alpha )\le {f}_{1}(\alpha ), where {f}_{1}(\alpha ) is a twice differentiable convex function with {f}_{1}(0)=0 and {f}_{1}^{\prime}(0)=-2{\delta}^{2}<0, we can easily prove the following lemma.
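To see the lemma at work, here is a minimal numerical sketch on a hypothetical example function; it checks the lemma's conclusion, which (as in [19]) is the bound h(t)\le \frac{1}{2}t{h}^{\prime}(0) for t\in [0,{t}^{\ast}]:

```python
# Hypothetical example for Lemma 5.6: h(t) = t^2 - 2t is twice differentiable
# and convex, with h(0) = 0, h'(0) = -2 < 0, global minimizer t* = 1, and
# h''(t) = 2 (weakly) increasing on [0, t*].
def h(t):
    return t * t - 2.0 * t

h_prime_0 = -2.0  # h'(0)
t_star = 1.0      # minimizer of h

# Check h(t) <= (t/2) * h'(0) on a grid of [0, t*].
for i in range(101):
    t = t_star * i / 100.0
    assert h(t) <= 0.5 * t * h_prime_0 + 1e-12
```

Equality holds at t=0 and t={t}^{\ast} for this particular h, so the bound is tight at both endpoints.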

**Lemma 5.7** *If the step size* *α* *is such that* \alpha \le \tilde{\alpha}, *then*

The following theorem states the results which show that the default step size (65) yields sufficient decrease of the barrier function value during each inner iteration.

**Theorem 5.8**
*One has*

*Proof* From Lemma 5.7, Corollary 3.8 and (65), we have

This completes the proof of the theorem. □

## 6 Complexity of the algorithm

In this section, we first derive an upper bound for the number of iterations required by our algorithm. Then we show that the resulting iteration bounds match the currently best known iteration bounds for large- and small-update methods, respectively.

### 6.1 Iteration bound for a large-update method

For the analysis of the iterations of the algorithm, we need to count how many inner iterations are required to return to the situation where \mathrm{\Psi}(v)\le \tau. We denote the value of \mathrm{\Psi}(v) after the *μ*-update by {\mathrm{\Psi}}_{0}; the subsequent values in the same outer iteration are denoted by {\mathrm{\Psi}}_{k}, k=1,\dots ,K, where *K* denotes the total number of inner iterations in the outer iteration. According to the decrease of f(\tilde{\alpha}), we get

where \beta =\frac{1}{96\sigma (1+2\kappa )} and \gamma =\frac{1}{2}.

**Lemma 6.1** (Lemma 14 in [19])

*Suppose that*
{t}_{0},{t}_{1},\dots ,{t}_{K}
*is a sequence of positive numbers such that*

*where* \beta >0 *and* 0<\gamma \le 1. *Then* K\le \lceil \frac{{t}_{0}^{\gamma}}{\beta \gamma}\rceil.
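As a quick numerical sanity check of Lemma 6.1 (a sketch with illustrative, hypothetical values of *σ* and *κ*; the recurrence is run with equality, the worst case allowed by the hypothesis), one can iterate {t}_{k+1}={t}_{k}-\beta {t}_{k}^{1-\gamma} and compare the step count against the stated bound:

```python
import math

def inner_iterations(t0, beta, gamma, tol=1e-12):
    """Run the worst-case recurrence t_{k+1} = t_k - beta * t_k**(1-gamma)
    (Lemma 6.1 with equality) and count steps until t_k is essentially zero."""
    t, k = t0, 0
    while t > tol:
        t -= beta * t ** (1.0 - gamma)
        k += 1
    return k

# Illustrative (hypothetical) parameter values; in the text,
# beta = 1/(96*sigma*(1+2*kappa)) and gamma = 1/2.
sigma, kappa = 5.0, 0.5
beta, gamma = 1.0 / (96.0 * sigma * (1.0 + 2.0 * kappa)), 0.5

t0 = 100.0
K = inner_iterations(t0, beta, gamma)
# Lemma 6.1: K <= ceil(t0^gamma / (beta * gamma))
bound = math.ceil(t0 ** gamma / (beta * gamma))
assert 0 < K <= bound
```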

Combining Lemma 6.1 and (66), we can easily verify the following main result.

**Theorem 6.2**
*One has*

By applying Corollary 5.2, (34), and the fact that \psi (t)\le \frac{{t}^{2}}{2} when t\ge 1, we have

From the above expression with \theta =\mathrm{\Theta}(1) and \tau =O(r), and also applying Lemma 3.2, we can conclude that \sigma =O(\log r).

The number of outer iterations is bounded above by \frac{1}{\theta}\log\frac{r}{\epsilon} (*cf.* Lemma II.17 in [13]). By multiplying the number of outer iterations by the number of inner iterations, we get an upper bound for the total number of iterations, namely

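For concreteness, substituting \beta =\frac{1}{96\sigma (1+2\kappa )} and \gamma =\frac{1}{2} into the bound of Lemma 6.1 gives the following sketch of the inner-iteration count and the resulting product ({\mathrm{\Psi}}_{0} denotes the value after the *μ*-update, as above):

```latex
K \le \left\lceil \frac{\Psi_0^{\gamma}}{\beta\gamma} \right\rceil
  = \bigl\lceil 192\,\sigma(1+2\kappa)\sqrt{\Psi_0}\,\bigr\rceil,
\qquad
K \cdot \frac{1}{\theta}\log\frac{r}{\epsilon}
  \le \bigl\lceil 192\,\sigma(1+2\kappa)\sqrt{\Psi_0}\,\bigr\rceil
      \cdot \frac{1}{\theta}\log\frac{r}{\epsilon}.
```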
After some elementary reductions, we have the following theorem, which gives the currently best known iteration bound for the large-update method.

**Theorem 6.3** *For the large*-*update method*, *which is characterized by* \theta =\mathrm{\Theta}(1) *and* \tau =O(r), *the algorithm requires at most*

O((1+2\kappa )\sqrt{r}\log r\log\frac{r}{\epsilon})

*iterations*. *The output gives an* *ε*-*approximate solution of the Cartesian* {P}_{\ast}(\kappa )-*SCLCP*.

### 6.2 Iteration bound for a small-update method

It is not hard to show that if the above analysis is applied to a small-update method, the resulting iteration bound is not as good as it can be for this type of method. For the analysis of the iteration bound of a small-update method, we need a more accurate estimate of the upper bound of {\mathrm{\Psi}}_{0}. It should be noted that the following analysis only holds for \sigma \ge 2.

By applying Corollary 5.2, (35), Lemma 3.4, and the fact that 1-\sqrt{1-\theta}=\frac{\theta}{1+\sqrt{1-\theta}}\le \theta, we have

From the above expression with \theta =\mathrm{\Theta}(\frac{1}{\sqrt{r}}) and \tau =O(1), and also applying Lemma 3.2, we can conclude that \sigma =O(1). It follows from Theorem 6.2 that the total number of iterations is bounded above by

After some elementary reductions, we have the following theorem, which gives the currently best known iteration bound for a small-update method.

**Theorem 6.4** *For a small*-*update method*, *which is characterized by* \theta =\mathrm{\Theta}(\frac{1}{\sqrt{r}}) *and* \tau =O(1), *the algorithm requires at most*

O((1+2\kappa )\sqrt{r}\log\frac{r}{\epsilon})

*iterations*. *The output gives an* *ε*-*approximate solution of the Cartesian* {P}_{\ast}(\kappa )-*SCLCP*.
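The elementary fact used in the estimate of {\mathrm{\Psi}}_{0} above, 1-\sqrt{1-\theta}=\frac{\theta}{1+\sqrt{1-\theta}}\le \theta, is easy to verify numerically (a minimal sketch over a grid of *θ* values in (0,1)):

```python
import math

# Verify 1 - sqrt(1-theta) = theta / (1 + sqrt(1-theta)) <= theta
# for theta on a grid of (0, 1).
checked = 0
for i in range(1, 100):
    theta = i / 100.0
    lhs = 1.0 - math.sqrt(1.0 - theta)
    mid = theta / (1.0 + math.sqrt(1.0 - theta))
    assert abs(lhs - mid) < 1e-12  # the identity
    assert lhs <= theta            # the inequality
    checked += 1
```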

## 7 Conclusions and remarks

In this paper, we have shown that primal-dual IPMs for LO [17] and {P}_{\ast}(\kappa )-LCP [22] based on the finite barrier can be extended to the context of the Cartesian {P}_{\ast}(\kappa )-SCLCP. The iteration bounds for large- and small-update methods are obtained, namely O((1+2\kappa )\sqrt{r}\log r\log\frac{r}{\epsilon}) and O((1+2\kappa )\sqrt{r}\log\frac{r}{\epsilon}), respectively. In both cases, we were able to match the best known iteration bounds for these types of methods. Moreover, this unifies the analysis for the {P}_{\ast}(\kappa )-LCP, the Cartesian {P}_{\ast}(\kappa )-SOCLCP, and the Cartesian {P}_{\ast}(\kappa )-SDLCP.

Some interesting topics for further research remain. One possible topic is to investigate whether the NT-scaling scheme can be replaced by other scaling schemes while still obtaining polynomial-time iteration bounds. Another worthwhile direction for further research is the development of infeasible kernel-based IPMs for SCLCP.

## Endnote

^{a} It may be worth mentioning that if we use the kernel function of the classical logarithmic barrier function, *i.e.*, \psi (t)=\frac{1}{2}({t}^{2}-1)-\log t, then {\psi}^{\prime}(t)=t-{t}^{-1}, whence -{\psi}^{\prime}(v)={v}^{-1}-v, and hence system (52) coincides with the classical system (51).
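As a minimal numerical check of this remark, using only the formulas displayed above:

```python
import math

def psi(t):
    """Kernel function of the classical logarithmic barrier: (t^2 - 1)/2 - log t."""
    return 0.5 * (t * t - 1.0) - math.log(t)

def dpsi(t):
    """psi'(t) = t - 1/t, as in the endnote."""
    return t - 1.0 / t

# psi attains its minimum value 0 at t = 1, where psi'(1) = 0.
assert abs(psi(1.0)) < 1e-12
assert abs(dpsi(1.0)) < 1e-12

# Central finite differences confirm dpsi is indeed the derivative of psi.
h = 1e-6
for t in (0.5, 1.0, 2.0, 3.0):
    fd = (psi(t + h) - psi(t - h)) / (2.0 * h)
    assert abs(fd - dpsi(t)) < 1e-6
```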

## References

1. Faybusovich L: Linear system in Jordan algebras and primal-dual interior-point algorithms. *J. Comput. Appl. Math.* 1997, 86: 149–175.

2. Yoshise A: Complementarity problems over symmetric cones: a survey of recent developments in several aspects. In *Handbook on Semidefinite, Conic and Polynomial Optimization: Theory, Algorithms, Software and Applications*. Edited by: Anjos MF, Lasserre JB. International Series in Operations Research and Management Science 166. Springer, New York; 2012:339–376.

3. Faybusovich L: Euclidean Jordan algebras and interior-point algorithms. *Positivity* 1997, 1: 331–357.

4. Potra FA: An infeasible interior point method for linear complementarity problems over symmetric cones. In *Proceedings of the 7th International Conference of Numerical Analysis and Applied Mathematics*, Rethymno, Crete, Greece, 18-22 September 2009. Edited by: Simos T. Am. Inst. of Phys., New York; 2009:1403–1406.

5. Yoshise A: Interior point trajectories and a homogeneous model for nonlinear complementarity problems over symmetric cones. *SIAM J. Optim.* 2006, 17: 1129–1153.

6. Chen X, Qi HD: Cartesian *P*-property and its applications to the semidefinite linear complementarity problem. *Math. Program.* 2006, 106: 177–201.

7. Pan SH, Chen JS: A regularization method for the second-order cone complementarity problem with the Cartesian {P}_{0}-property. *Nonlinear Anal.* 2009, 70: 1475–1491.

8. Luo ZY, Xiu NH: Path-following interior point algorithms for the Cartesian {P}_{\ast}(\kappa )-LCP over symmetric cones. *Sci. China Ser. A* 2009, 52: 1769–1784.

9. Kojima M, Megiddo N, Noma T, Yoshise A: *A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems*. Lecture Notes in Computer Science 538. Springer, New York; 1991.

10. Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian P-matrix linear complementarity problem over symmetric cones. *J. Optim. Theory Appl.* 2012, 152: 739–772.

11. Lesaja G, Wang GQ, Zhu DT: Interior-point methods for Cartesian {P}_{\ast}(\kappa )-linear complementarity problems over symmetric cones based on the eligible kernel functions. *Optim. Methods Softw.* 2012, 27: 827–843.

12. Wang GQ, Lesaja G: Full Nesterov-Todd step feasible interior-point method for the Cartesian {P}_{\ast}(\kappa )-SCLCP. *Optim. Methods Softw.* 2013, 28: 600–618.

13. Roos C, Terlaky T, Vial J-P: *Theory and Algorithms for Linear Optimization. An Interior-Point Approach*. Wiley, Chichester; 1997. (2nd edn., Springer, New York; 2006)

14. Gu G, Zangiabadi M, Roos C: Full Nesterov-Todd step interior-point method for symmetric optimization. *Eur. J. Oper. Res.* 2011, 214: 473–484.

15. Liu LX, Liu SY, Wang CF: Smooth Newton methods for symmetric cone linear complementarity problem with the Cartesian P/{P}_{0}-property. *J. Ind. Manag. Optim.* 2011, 7: 53–66.

16. Huang ZH, Lu N: Global and global linear convergence of a smooth algorithm for the Cartesian {P}_{\ast}(\kappa )-SCLCP. *J. Ind. Manag. Optim.* 2012, 8: 67–86.

17. Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. *SIAM J. Optim.* 2003, 13: 766–782.

18. Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. *SIAM J. Optim.* 2004, 15: 101–128.

19. Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. *Math. Program.* 2002, 93: 129–171.

20. Ghami ME, Ivanov I, Melissen JBM, Roos C, Steihaug T: A polynomial-time algorithm for linear optimization based on a new class of kernel functions. *J. Comput. Appl. Math.* 2009, 224: 500–513.

21. Ghami ME, Roos C, Steihaug T: A generic primal-dual interior-point method for semidefinite optimization based on a new class of kernel functions. *Optim. Methods Softw.* 2010, 25: 387–403.

22. Wang GQ, Bai YQ: Polynomial interior-point algorithms for {P}_{\ast}(\kappa ) horizontal linear complementarity problem. *J. Comput. Appl. Math.* 2009, 233: 248–263.

23. Faraut J, Korányi A: *Analysis on Symmetric Cones*. Oxford University Press, New York; 1994.

24. Baes M: Convexity and differentiability properties of spectral functions and spectral mappings on Euclidean Jordan algebras. *Linear Algebra Appl.* 2007, 422: 664–700.

25. Choi BK, Lee GM: New complexity analysis for primal-dual interior-point methods for self-scaled optimization problems. *Fixed Point Theory Appl.* 2012, 2012: Article ID 213.

26. Faybusovich L: A Jordan-algebraic approach to potential-reduction algorithms. *Math. Z.* 2002, 239: 117–129.

27. Korányi A: Monotone functions on formally real Jordan algebras. *Math. Ann.* 1984, 269: 73–76.

28. Schmieta SH, Alizadeh F: Extension of primal-dual interior-point algorithms to symmetric cones. *Math. Program.* 2003, 96: 409–438.

29. Vieira MVC: Jordan algebraic approach to symmetric optimization. PhD thesis, Delft University of Technology, The Netherlands; 2007.

30. Nesterov YE, Todd MJ: Self-scaled barriers and interior-point methods for convex programming. *Math. Oper. Res.* 1997, 22: 1–42.

31. Nesterov YE, Todd MJ: Primal-dual interior-point methods for self-scaled cones. *SIAM J. Optim.* 1998, 8: 324–364.

## Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11001169), China Postdoctoral Science Foundation funded project (No. 2012T50427) and Connotative Construction Project of Shanghai University of Engineering Science (No. NHKY-2012-13).

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors carried out the proof. All authors conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Wang, G., Li, M., Yue, Y. *et al.* New complexity analysis of interior-point methods for the Cartesian {P}_{\ast}(\kappa )-SCLCP.
*J Inequal Appl* **2013**, 285 (2013). https://doi.org/10.1186/1029-242X-2013-285
