 Research
 Open Access
Kernel-function-based primal-dual interior-point methods for convex quadratic optimization over symmetric cone
 Xinzhong Cai^{1},
 Lin Wu^{2},
 Yujing Yue^{1},
 Minmin Li^{2} and
 Guoqiang Wang^{2}Email author
https://doi.org/10.1186/1029-242X-2014-308
© Cai et al.; licensee Springer. 2014
 Received: 29 March 2014
 Accepted: 24 July 2014
 Published: 21 August 2014
Abstract
In this paper, we give a unified computational scheme for the complexity analysis of kernel-function-based primal-dual interior-point methods for convex quadratic optimization over symmetric cone. By using Euclidean Jordan algebras, the currently best-known iteration bounds for large- and small-update methods are derived, namely, $O(\sqrt{r}\log r\log\frac{r}{\epsilon})$ and $O(\sqrt{r}\log\frac{r}{\epsilon})$, respectively. Furthermore, this unifies the analysis for a wide class of conic optimization problems.
MSC: 90C25, 90C51.
Keywords
 interior-point methods
 convex quadratic optimization
 kernel function
 Euclidean Jordan algebras
 large- and small-update methods
 polynomial complexity
1 Introduction
Since the groundbreaking paper of Karmarkar, many researchers have proposed and analyzed various interior-point methods (IPMs) for linear optimization (LO), and a large number of results have been reported [1–4]. However, there is a gap between the practical behavior of IPMs and their theoretical performance. The so-called small-update IPMs enjoy the best-known worst-case iteration bound $O(\sqrt{n}\log\frac{n}{\epsilon})$, but their performance in computational practice is poor. In practice, the so-called large-update IPMs are much more efficient than small-update IPMs, yet they come with the relatively weak theoretical bound $O(n\log\frac{n}{\epsilon})$. Peng et al. [5] introduced so-called self-regular barrier functions for primal-dual IPMs for LO; with these, the iteration bound for large-update methods was improved from $O(n\log\frac{n}{\epsilon})$ to $O(\sqrt{n}\log n\log\frac{n}{\epsilon})$, which almost closes the gap between the iteration bounds for large- and small-update methods. Bai et al. [6] presented a large class of eligible kernel functions, which is fairly general and includes the classical logarithmic function, the self-regular functions, and many non-self-regular functions as special cases. For appropriate choices of the eligible kernel functions, the iteration bounds obtained for LO are as good as the best-known ones in [5]. For some other related kernel-based IPMs we refer to [7–27].
where ${\mathcal{A}}^{T}$ is the adjoint of $\mathcal{A}$. Many researchers have studied CQSCO and obtained a wealth of elegant results. For an overview of these results we refer to [28–34].
where μ is a positive parameter that is driven to zero. Since the IPC holds and $\mathcal{A}$ is surjective, the parameterized system (1) has a unique solution $(x(\mu ),y(\mu ),s(\mu ))$ for each $\mu >0$; we call $x(\mu )$ the μ-center of (P) and $(y(\mu ),s(\mu ))$ the μ-center of (D). The set of μ-centers (with μ running through all positive real numbers) forms a homotopy path, which is called the central path. As $\mu \to 0$, the limit of the central path exists, and since the limit points satisfy the complementarity condition $x\circ s=0$, the limit naturally yields an optimal solution for (P) and (D) (see, e.g., [29, 35]).
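For the reader's convenience, in the standard CQSCO setting the relaxed optimality system that defines the μ-centers takes the following shape; this is our hedged reconstruction of system (1), and the precise form is as in [29, 35]:

```latex
\begin{aligned}
\mathcal{A}x &= b, & x &\in \mathcal{K},\\
\mathcal{A}^{T}y + s - \mathcal{Q}x &= c, & s &\in \mathcal{K},\\
x \circ s &= \mu e, &&
\end{aligned}
```

where e denotes the identity element of the EJA $\mathcal{V}$. Replacing the complementarity condition $x\circ s=0$ by the relaxed condition $x\circ s=\mu e$ is what makes the system uniquely solvable for each $\mu>0$.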
The appropriate choices of u that lead to unique search directions from the above system are called the commutative class of search directions (see, e.g., [36]). In this paper, we consider the so-called NT-scaling scheme; the resulting direction is called the NT search direction. This scaling scheme was first proposed by Nesterov and Todd [37, 38] for self-scaled cones and later adapted by Faybusovich [35, 39] to symmetric cones.
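Concretely, the NT-scaling point w of $x\in\operatorname{int}\mathcal{K}$ and $s\in\operatorname{int}\mathcal{K}$ is commonly defined as follows (see, e.g., [37, 38]; we record it here as a hedged reminder, with $P(\cdot)$ the quadratic representation):

```latex
w := P(x)^{\frac{1}{2}}\bigl(P(x)^{\frac{1}{2}}s\bigr)^{-\frac{1}{2}},
\qquad
v := \frac{P(w)^{-\frac{1}{2}}x}{\sqrt{\mu}}
   = \frac{P(w)^{\frac{1}{2}}s}{\sqrt{\mu}}.
```

In the LO case ($\mathcal{V}={\mathbf{R}}^{n}$, $\mathcal{K}={\mathbf{R}}_{+}^{n}$) this reduces componentwise to $w=\sqrt{x/s}$ and $v=\sqrt{xs/\mu}$.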
Lemma 1.1 (Lemma 3.2 in [39])
where $\overline{\mathcal{A}}=\frac{\mathcal{A}P{(w)}^{\frac{1}{2}}}{\sqrt{\mu}}$ and $\overline{\mathcal{Q}}=P{(w)}^{\frac{1}{2}}\mathcal{Q}P{(w)}^{\frac{1}{2}}$. We can easily verify that the system (6) has a unique solution (see, e.g., [29, 35]).
Since (7) has the same coefficient matrix as (6), (7) also has a unique solution.^{a} It follows that the eligible kernel function, via its derivative ${\psi}^{\prime}(t)$, determines in a natural way search directions for an interior-point algorithm.
As in the LO case, the step size α should be chosen so that the proximity measure function $\mathrm{\Psi}(v)$ decreases sufficiently. A default bound for such a step size α will be given later in (38).
Hence, the value of $\mathrm{\Psi}(v)$ can be considered as a measure for the distance between the given iterate $(x,y,s)$ and the corresponding μ-center $(x(\mu ),y(\mu ),s(\mu ))$.
Given any eligible kernel function $\psi (t)$, the parameters τ, θ, and the step size α should be chosen in such a way that the algorithm is ‘optimized’ in the sense that the number of iterations required by the algorithm is as small as possible. We will show in Section 5 that the resulting iteration bounds depend on the eligible kernel function.
The purpose of this paper is to propose a unified analysis of kernel-function-based primal-dual IPMs for CQSCO and to give a general scheme for calculating the iteration bounds for the entire class of eligible kernel functions. The complexity results obtained match the best-known iteration bounds for large-update methods, $O(\sqrt{r}\log r\log\frac{r}{\epsilon})$, and for small-update methods, $O(\sqrt{r}\log\frac{r}{\epsilon})$. The iteration bounds are of the same order as those for the LO case, except that n is replaced by r, the rank of the EJA. Although expected, these results are not obvious, and at certain steps the analysis is not a trivial or straightforward generalization of the LO case. Furthermore, this unifies the analysis for a wide class of conic optimization problems, including LO, CQO, SOCO, SDO, CQSDO, SCO, and so on.
The outline of the paper is as follows. In Section 2, we provide some basic concepts and useful results on EJAs and symmetric cones. In Section 3, we recall and develop some useful properties of the eligible kernel functions and the corresponding barrier functions. In Section 4, we uniformly analyze the primal-dual IPMs for CQSCO. In Section 5, we derive the complexity bounds for large- and small-update methods. In Section 6, we report some preliminary numerical experiments. Finally, some conclusions and remarks are made in Section 7.
The following notation is used throughout the paper. ${\mathbf{R}}^{n}$, ${\mathbf{R}}_{+}^{n}$, and ${\mathbf{R}}_{++}^{n}$ denote the set of all vectors (with n components), the set of nonnegative vectors, and the set of positive vectors, respectively. ${\mathbf{R}}^{m\times n}$ is the space of all $m\times n$ matrices. ${\mathbf{S}}^{n}$, ${\mathbf{S}}_{+}^{n}$, and ${\mathbf{S}}_{++}^{n}$ denote the cones of symmetric, symmetric positive semidefinite, and symmetric positive definite $n\times n$ matrices, respectively. We use the matrix inner product $A\bullet B=\operatorname{tr}({A}^{T}B)$, i.e., the trace of the matrix ${A}^{T}B$. The largest and smallest eigenvalues of x are denoted by ${\lambda}_{\mathrm{max}}(x)$ and ${\lambda}_{\mathrm{min}}(x)$, respectively. The Löwner partial ordering ‘${\succeq}_{\mathcal{K}}$’ of $\mathcal{V}$ defined by a symmetric cone $\mathcal{K}$ is given by $x{\succeq}_{\mathcal{K}}s$ if $x-s\in \mathcal{K}$. The interior of $\mathcal{K}$ is denoted by $\operatorname{int}\mathcal{K}$, and we write $x{\succ}_{\mathcal{K}}s$ if $x-s\in \operatorname{int}\mathcal{K}$. Finally, if $g(x)\ge 0$ is a real-valued function of a real nonnegative variable, the notation $g(x)=O(x)$ means that $g(x)\le \overline{c}x$ for some positive constant $\overline{c}$, and $g(x)=\mathrm{\Theta}(x)$ means that ${c}_{1}x\le g(x)\le {c}_{2}x$ for two positive constants ${c}_{1}$ and ${c}_{2}$.
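As a concrete illustration of the Löwner ordering (not part of the paper), consider the special case $\mathcal{V}={\mathbf{S}}^{n}$ with $\mathcal{K}={\mathbf{S}}_{+}^{n}$: then $x{\succeq}_{\mathcal{K}}s$ simply means that $x-s$ is positive semidefinite. The sketch below, in pure Python with illustrative helper names, tests this via symmetric Gaussian elimination (nonnegative pivots characterize PSD matrices):

```python
# Hedged illustration: Loewner order on S^n via a PSD test.  `is_psd` and
# `loewner_geq` are our own helper names, not notation from the paper.

def is_psd(a, tol=1e-12):
    """Return True if the symmetric matrix `a` (list of lists) is positive
    semidefinite, using symmetric elimination with nonnegative pivots."""
    n = len(a)
    m = [row[:] for row in a]          # work on a copy
    for k in range(n):
        if m[k][k] < -tol:
            return False               # negative pivot -> not PSD
        if m[k][k] <= tol:
            # zero pivot: the whole remaining row must vanish for PSD
            if any(abs(m[k][j]) > tol for j in range(k + 1, n)):
                return False
            continue
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return True

def loewner_geq(x, s):
    """x >=_K s  iff  x - s is in K = S^n_+."""
    n = len(x)
    diff = [[x[i][j] - s[i][j] for j in range(n)] for i in range(n)]
    return is_psd(diff)

x = [[2.0, 0.0], [0.0, 3.0]]
s = [[1.0, 0.0], [0.0, 1.0]]
print(loewner_geq(x, s))   # x - s = diag(1, 2) is PSD -> True
print(loewner_geq(s, x))   # s - x = diag(-1, -2) is not -> False
```

For a general EJA the same test is phrased through the eigenvalues of the spectral decomposition: $x{\succeq}_{\mathcal{K}}s$ iff ${\lambda}_{\mathrm{min}}(x-s)\ge 0$.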
2 Preliminaries
where $L{(x)}^{2}=L(x)L(x)$, respectively.
is indeed a symmetric cone (cf. Theorem III.2.1 in [40]). In the sequel, $\mathcal{K}$ always denotes a symmetric cone, and $\mathcal{V}$ an EJA with $\operatorname{rank}(\mathcal{V})=r$ whose cone of squares is $\mathcal{K}$.
The following theorem gives an important decomposition, the spectral decomposition, on the space $\mathcal{V}$.
Theorem 2.1 (Theorem III.1.2 in [40])
respectively.
It should be noted that ${\psi}^{\prime}(x)$ is simply the vector-valued function induced by the derivative ${\psi}^{\prime}(t)$ of $\psi (t)$, rather than the derivative of the vector-valued function $\psi (x)$ defined by (14).
The following theorem provides another important decomposition, the Peirce decomposition, on the space $\mathcal{V}$.
Theorem 2.2 (Theorem IV.2.1 in [40])
Lemma 2.3 (Lemma 14 in [36])
respectively.
The following two theorems give explicitly the first derivatives of $F(x)$ and $G(x)$, respectively.
Theorem 2.4 (Theorem 38 in [41])
Theorem 2.5 (Lemma 1 in [42])
where $1\le i<j\le r$.
3 Properties of the eligible kernel (barrier) functions
This means that $\psi (t)$ is strictly convex and attains its minimum at $t=1$, with $\psi (1)=0$. Moreover, (22c) implies that $\psi (t)$ has the barrier property.
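These properties are easy to check numerically for the classical logarithmic kernel $\psi (t)=\frac{{t}^{2}-1}{2}-\log t$ (a hedged illustration, not code from the paper): the function and its derivative vanish at $t=1$, the second derivative is strictly positive, and the value grows without bound as $t\to 0^{+}$.

```python
import math

# Hedged numerical check for the classical logarithmic kernel.

def psi(t):   return (t * t - 1.0) / 2.0 - math.log(t)
def dpsi(t):  return t - 1.0 / t          # psi'(t)
def ddpsi(t): return 1.0 + 1.0 / (t * t)  # psi''(t) > 0 for all t > 0

print(psi(1.0), dpsi(1.0))        # both 0: strict minimum at t = 1
print(all(ddpsi(0.1 * k) > 0 for k in range(1, 50)))   # strict convexity
print(psi(1e-4) < psi(1e-8) < psi(1e-12))  # barrier: growth as t -> 0+
```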
Note that the first four conditions are logically independent, while the fifth condition is a consequence of (23b) and (23c). Since (23b) is much simpler to check than (23e), in many cases one can establish that $\psi (t)$ is eligible by verifying only the first four conditions [6].
The following lemma, cited from [6], states the exponential convexity property, which plays an important role in the analysis of kernel-function-based primal-dual IPMs [5, 6].
Lemma 3.1 (Lemma 2.1 in [6])
where $\mathrm{\nabla}\mathrm{\Psi}(v)$ denotes the gradient of the barrier function $\mathrm{\Psi}(v)$.
As a consequence of Lemma 3.1, we have the following important result.
Theorem 3.2 (Theorem 4.3.2 in [23])
By applying Theorem 3.2 in [6], with x taken to be the vector in ${\mathbf{R}}^{r}$ consisting of all the eigenvalues of v, the theorem below follows immediately.
Proof With $\beta =\frac{1}{\sqrt{1-\theta}}\ge 1$ and $\mathrm{\Psi}(v)\le \tau $, the corollary follows immediately from Theorem 3.3. □
Hence, we can conclude that $\delta (v)\ge 0$, and $\delta (v)=0$ if and only if $\mathrm{\Psi}(v)=0$.
It follows from (25) and (28) that $\delta (v)$ and $\mathrm{\Psi}(v)$ depend only on the eigenvalues ${\lambda}_{i}(v)$ of v. This observation makes it possible to apply Theorem 4.8 in [6], with x taken to be the vector in ${\mathbf{R}}^{r}$ consisting of all the eigenvalues of v. This gives the following theorem, which yields a lower bound on $\delta (v)$ in terms of $\mathrm{\Psi}(v)$.
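Since both quantities depend only on the eigenvalues, the lower bound $\delta (v)\ge \frac{1}{2}{\psi}^{\prime}(\varrho (\mathrm{\Psi}(v)))$ can be checked numerically in terms of an eigenvalue vector. The sketch below does this for the classical logarithmic kernel; it is a hedged illustration under that specific choice of kernel, with the inverse function ϱ computed by bisection:

```python
import math

# Hedged numerical check of delta(v) >= psi'(varrho(Psi(v)))/2 for the
# classical logarithmic kernel, with v represented by its eigenvalues.

def psi(t):  return (t * t - 1.0) / 2.0 - math.log(t)
def dpsi(t): return t - 1.0 / t

def varrho(s):
    """Inverse of psi on [1, inf), computed by bisection."""
    lo, hi = 1.0, 2.0
    while psi(hi) < s:          # expand the bracket until it contains the root
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < s else (lo, mid)
    return 0.5 * (lo + hi)

def Psi(eigs):   return sum(psi(t) for t in eigs)
def delta(eigs): return 0.5 * math.sqrt(sum(dpsi(t) ** 2 for t in eigs))

eigs = [0.5, 1.3, 2.4]          # eigenvalues of some v with v in int K
lhs = delta(eigs)
rhs = 0.5 * dpsi(varrho(Psi(eigs)))
print(lhs >= rhs)               # the lower bound of the theorem holds
```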
which bounds the second-order derivative of $\mathrm{\Psi}(x(t))$ with respect to t (see, e.g., [23]).
4 Analysis of the algorithms
which means that ${f}_{1}(\alpha )$ gives an upper bound for the decrease of the barrier function $\mathrm{\Psi}(v)$. Furthermore, we can easily verify that $f(0)={f}_{1}(0)=0$.
In the sequel, we use the short notation $\delta :=\delta (v)$.
This completes the proof of the lemma. □
Similarly to the proof of Lemma 4.1 in [6], we have the following lemma, which gives an upper bound for ${f}_{1}^{\prime\prime}(\alpha )$ in terms of δ and ${\psi}^{\prime\prime}(t)$.
as the default step size.
In what follows, we show that the barrier function $\mathrm{\Psi}(v)$ decreases in each inner iteration with the default step size $\tilde{\alpha}$ defined by (38). For this, we need the following technical result.
Lemma 4.3 (Lemma 3.12 in [5])
As a consequence of Lemma 4.3 and the fact that $f(\alpha )\le {f}_{1}(\alpha )$, where ${f}_{1}$ is a twice differentiable convex function with ${f}_{1}(0)=0$ and ${f}_{1}^{\prime}(0)=-2{\delta}^{2}<0$, we can easily prove the following lemma.
Combining the results of Lemma 4.4 and (38), we have the following theorem, which shows that the default step size (38) yields a sufficient decrease of the barrier function value during each inner iteration.
This expresses the decrease in $\mathrm{\Psi}(v)$ during an inner iteration entirely in terms of ψ, its first and second derivatives, and the inverse functions ρ and ϱ.
5 Complexity of the algorithms
In this section, we first derive an upper bound for the number of iterations required by the algorithm depicted in Figure 1. We then conclude the section by applying this iteration bound to a wide variety of kernel functions.
5.1 Iteration bounds for the algorithms
We need to count how many inner iterations are required to return to the situation where $\mathrm{\Psi}(v)\le \tau $. We denote the value of $\mathrm{\Psi}(v)$ just after the μ-update by ${\mathrm{\Psi}}_{0}$; the subsequent values in the same outer iteration are denoted by ${\mathrm{\Psi}}_{k}$, $k=1,2,\dots ,K$, where K denotes the total number of inner iterations in the outer iteration.
Note that the left-hand side expression is increasing in $\mathrm{\Psi}(v)$. Therefore, such numbers β and γ certainly exist (take, e.g., $\gamma =1$ and β equal to the value of the left-hand side expression at $\mathrm{\Psi}(v)=\tau $). The appropriate values of β and γ vary with the eligible kernel function, however, and finding them may not always be straightforward.
The following lemma provides an estimate for the number of inner iterations between two successive barrier parameter updates, in terms of ${\mathrm{\Psi}}_{0}$ and the parameters β and γ.
Thus, the conclusion of the lemma follows immediately from Lemma 14 in [5] with ${t}_{k}={\mathrm{\Psi}}_{k}$. This completes the proof of the lemma. □
The number of outer iterations coincides with the number of updates of the barrier parameter until we obtain $r\mu <\epsilon $. It is well known (cf. Lemma II.17 in [1]) that the number of outer iterations is bounded above by $\frac{1}{\theta}\log\frac{r}{\epsilon}$. Thus, an upper bound on the total number of iterations is obtained by multiplying the number of outer iterations by the number of inner iterations.
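The outer-iteration bound is easy to verify empirically. The sketch below (an illustration, not the paper's code) counts the actual number of updates ${\mu}_{+}=(1-\theta )\mu $ needed to reach $r\mu <\epsilon $, assuming ${\mu}_{0}=1$ as in the experiments of Section 6, and compares it against $\lceil\frac{1}{\theta}\log\frac{r}{\epsilon}\rceil$:

```python
import math

# Hedged sanity check of the outer-iteration bound (1/theta)*log(r/eps).

def outer_iterations(r, theta, eps, mu0=1.0):
    """Number of mu-updates mu <- (1 - theta)*mu performed until r*mu < eps."""
    mu, k = mu0, 0
    while r * mu >= eps:
        mu *= (1.0 - theta)
        k += 1
    return k

r, eps = 8, 1e-7
for theta in (0.9, 1.0 / (2.0 * math.sqrt(r))):   # large- and small-update
    bound = math.ceil(math.log(r / eps) / theta)
    print(theta, outer_iterations(r, theta, eps), bound)
```

The count never exceeds the bound, since $(1-\theta )^{K}\le {e}^{-\theta K}$.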
which means that the total number of iterations is completely determined by the parameters θ, β, γ, τ, and the eligible kernel function $\psi (t)$.
5.2 Application to the eligible kernel functions
It follows from Theorem 5.2 that the iteration bound of the algorithms depends on the parameters β and γ and on the upper bound for ${\mathrm{\Psi}}_{0}$. Since these differ between eligible kernel functions, the iteration bounds vary as well. As in the analysis of [6], Section 6.1, for the LO case, the iteration bounds for large- and small-update methods based on the eligible kernel functions can be computed in a systematic way by using the following scheme.

Step 0: Input an eligible kernel function $\psi (t)$; an update parameter θ, $0<\theta <1$; a threshold parameter τ; and an accuracy parameter ε.

Step 1: Solve the equation $-\frac{1}{2}{\psi}^{\prime}(t)=s$ to get $\rho (s)$, the inverse function of $-\frac{1}{2}{\psi}^{\prime}(t)$ for $t\in (0,1]$. If the equation is hard to solve, derive a lower bound for $\rho (s)$.

Step 2: Calculate the decrease of $\mathrm{\Psi}(v)$ in terms of δ for the default step size $\tilde{\alpha}$ from$f(\tilde{\alpha})\le -\frac{{\delta}^{2}}{{\psi}^{\prime\prime}(\rho (2\delta ))}.$

Step 3: Solve the equation $\psi (t)=s$ to get $\varrho (s)$, the inverse function of $\psi (t)$ for $t\ge 1$. If the equation is hard to solve, derive lower and upper bounds for $\varrho (s)$.

Step 4: Derive a lower bound for $\delta (v)$ in terms of $\mathrm{\Psi}(v)$ by using$\delta (v)\ge \frac{1}{2}{\psi}^{\prime}\left(\varrho (\mathrm{\Psi}(v))\right).$

Step 5: Using the results of Step 3 and Step 4, find positive constants β and γ, with $\gamma \in (0,1]$, such that$f(\tilde{\alpha})\le -\beta \mathrm{\Psi}{(v)}^{1-\gamma}.$

Step 6: Calculate the uniform upper bound ${\mathrm{\Psi}}_{0}$ for $\mathrm{\Psi}(v)$ from${\mathrm{\Psi}}_{0}\le {L}_{\psi}(r,\theta ,\tau ):=r\psi \left(\frac{\varrho (\frac{\tau}{r})}{\sqrt{1-\theta}}\right).$

Step 7: Derive an upper bound for the total number of iterations from$\frac{{\mathrm{\Psi}}_{0}^{\gamma}}{\theta \beta \gamma}\log\frac{r}{\epsilon}.$

Step 8: Set $\tau =O(r)$ and $\theta =\mathrm{\Theta}(1)$ to obtain an iteration bound for large-update methods, or set $\tau =O(1)$ and $\theta =\mathrm{\Theta}(\frac{1}{\sqrt{r}})$ to obtain an iteration bound for small-update methods.
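The steps above can be walked through concretely for the classical logarithmic kernel ${\psi}_{1}(t)=\frac{{t}^{2}-1}{2}-\log t$. The sketch below is a hedged illustration (the function names are ours, not the paper's): ρ happens to be available in closed form, ϱ is computed by bisection, and the remaining steps evaluate the bounds numerically.

```python
import math

# Hedged walk-through of Steps 1-7 for psi_1(t) = (t^2 - 1)/2 - log t.

def psi(t):   return (t * t - 1.0) / 2.0 - math.log(t)
def dpsi(t):  return t - 1.0 / t
def ddpsi(t): return 1.0 + 1.0 / (t * t)

# Step 1: rho(s) solves -psi'(t)/2 = s on (0, 1]; here in closed form,
# since 1/t - t = 2s gives t = sqrt(s^2 + 1) - s.
def rho(s):
    return math.sqrt(s * s + 1.0) - s

# Step 3: varrho(s) solves psi(t) = s on [1, inf); bisection suffices.
def varrho(s):
    lo, hi = 1.0, 2.0
    while psi(hi) < s:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < s else (lo, mid)
    return 0.5 * (lo + hi)

# Step 2: decrease for the default step size:
#   f(alpha~) <= -delta^2 / psi''(rho(2*delta)).
def decrease_bound(delta):
    return -delta * delta / ddpsi(rho(2.0 * delta))

# Step 4: delta(v) >= psi'(varrho(Psi(v))) / 2.
def delta_lower(Psi):
    return 0.5 * dpsi(varrho(Psi))

# Step 6: Psi_0 <= L_psi(r, theta, tau) = r * psi(varrho(tau/r)/sqrt(1-theta)).
def Psi0_upper(r, theta, tau):
    return r * psi(varrho(tau / r) / math.sqrt(1.0 - theta))

print(rho(0.0), varrho(0.0))        # both 1: psi is minimal at t = 1
print(decrease_bound(1.0) < 0.0)    # the barrier value strictly decreases
print(delta_lower(2.0) > 0.0)       # positive proximity away from the center
print(Psi0_upper(8, 0.9, 8.0))      # upper bound on Psi after a mu-update
```

Feeding these quantities into Steps 5 and 7 with $\tau =O(r)$, $\theta =\mathrm{\Theta}(1)$ recovers the familiar $O(r\log\frac{r}{\epsilon})$ large-update bound listed for ${\psi}_{1}$ in Table 1.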
Complexity results for the eligible kernel functions
i  The eligible kernel functions ${\psi}_{i}(t)$  Large-update methods  Small-update methods  Ref.

1  $\frac{{t}^{2}-1}{2}-\log t$  $O(r\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  e.g., [1]
2  $\frac{1}{2}{(t-\frac{1}{t})}^{2}$  $O({r}^{\frac{2}{3}}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [6]
3  $\frac{{t}^{2}-1}{2}+\frac{{t}^{1-q}-1}{q-1}$, $q>1$  $O(q{r}^{\frac{q+1}{2q}}\log\frac{r}{\epsilon})$  $O({q}^{2}\sqrt{r}\log\frac{r}{\epsilon})$  [6]
4  $\frac{{t}^{2}-1}{2}+\frac{{t}^{1-q}-1}{q(q-1)}-\frac{q-1}{q}(t-1)$, $q>1$  $O(q{r}^{\frac{q+1}{2q}}\log\frac{r}{\epsilon})$  $O({q}^{2}\sqrt{r}\log\frac{r}{\epsilon})$  [5]
5  $\frac{{t}^{2}-1}{2}+\frac{{e}^{\frac{1}{t}}-e}{e}$  $O(\sqrt{r}{(\log r)}^{2}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [6]
6  $\frac{{t}^{2}-1}{2}-{\int}_{1}^{t}{e}^{\frac{1}{\xi}-1}\,d\xi $  $O(\sqrt{r}{(\log r)}^{2}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [6]
7  $\frac{{t}^{2}-1}{2}+\frac{{e}^{q(\frac{1}{t}-1)}-1}{q}$, $q\ge 1$  $O(q\sqrt{r}\log\frac{r}{\epsilon})$  $O(q\sqrt{qr}\log\frac{r}{\epsilon})$  [7]
8  $\frac{{t}^{2}-1}{2}-{\int}_{1}^{t}{e}^{q(\frac{1}{\xi}-1)}\,d\xi $, $q\ge 1$  $O(q\sqrt{r}\log\frac{r}{\epsilon})$  $O(q\sqrt{qr}\log\frac{r}{\epsilon})$  [6]
9  $\frac{{t}^{2}-1}{2}+\frac{{(e-1)}^{2}}{e}\cdot\frac{1}{{e}^{t}-1}-\frac{e-1}{e}$  $O({r}^{\frac{3}{4}}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [10]
10  $8{t}^{2}-11t+1+\frac{2}{\sqrt{t}}-4\log t$  $O({r}^{\frac{5}{6}}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [19]
11  $8{t}^{2}-10t+\frac{2}{{t}^{3}}$  $O({r}^{\frac{5}{8}}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [14]
12  $\frac{{t}^{2}-1}{2}+\frac{6}{\pi}\tan(\frac{\pi (1-t)}{2+4t})$  $O({r}^{\frac{3}{4}}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [17]
13  $\frac{{t}^{2}-1}{2}-\log t+\frac{1}{8}{\tan}^{2}(\frac{\pi (1-t)}{2+4t})$  $O({r}^{\frac{2}{3}}\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [21]
14  $\frac{p({t}^{2}-1)}{2}+\frac{{t}^{-pq}-1}{q(q+1)}-\frac{pq(t-1)}{q+1}$, $p\ge 1$, $q>0$  $O(\sqrt{r}\log r\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [15]
15  $t+\frac{1}{t}-2$  $O(r\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [9]
16  $t-1+\frac{{t}^{1-q}-1}{q-1}$, $q>1$  $O(qr\log\frac{r}{\epsilon})$  $O({q}^{2}\sqrt{r}\log\frac{r}{\epsilon})$  [6]
17  $\frac{{t}^{p+1}-1}{p+1}-\log t$, $p\in [0,1]$  $O(r\log\frac{r}{\epsilon})$  $O(\sqrt{r}\log\frac{r}{\epsilon})$  [18]
18  $\{\begin{array}{l}\frac{{t}^{p+1}-1}{p+1}+\frac{{t}^{1-q}-1}{q-1},\quad t>0, p\in [0,1], q>1\\ \frac{{t}^{p+1}-1}{p+1}-\log t,\quad t>0, p\in [0,1], q=1\end{array}$  $O(q{r}^{\frac{p+q}{q(1+p)}}\log\frac{r}{\epsilon})$  $O({q}^{2}\sqrt{r}\log\frac{r}{\epsilon})$  [8]
In particular, for ${\psi}_{3}(t)$ and ${\psi}_{4}(t)$ this bound is obtained by choosing $q=\frac{1}{2}\log r$, and for ${\psi}_{7}(t)$ and ${\psi}_{8}(t)$ by choosing $q=\log r$. The same bound is achieved for ${\psi}_{18}(t)$ by taking $p=1$ and $q=\frac{1}{2}\log r$.
In particular, for ${\psi}_{3}(t)$, ${\psi}_{4}(t)$, ${\psi}_{7}(t)$, ${\psi}_{8}(t)$, ${\psi}_{16}(t)$, and ${\psi}_{18}(t)$, this bound is obtained if we take $q=O(1)$.
For both large- and small-update methods, the iteration bounds are of the same order as the bounds for the LO case, except that n is replaced by r, the rank of the EJA. Thus, the iteration bounds are as good as the current state of the art allows.
6 Numerical results
In this section, we report the computational performance of the algorithm depicted in Figure 1 for CQSDO, which is an important special case of CQSCO. The numerical experiments were carried out on a PC with an Intel(R) Core(TM) i5-2500 CPU at 3.30 GHz and 8 GB of physical memory, running MATLAB Version 7.11.0.584 (R2010b) on a Windows 7 Enterprise 64-bit operating system.
Here, $\mathcal{Q}:{\mathbf{S}}^{n}\to {\mathbf{S}}^{n}$ is a given self-adjoint positive semidefinite linear operator on ${\mathbf{S}}^{n}$, i.e., for any $A,B\in {\mathbf{S}}^{n}$, $\mathcal{Q}(A)\bullet B=A\bullet \mathcal{Q}(B)$ and $\mathcal{Q}(A)\bullet A\ge 0$; $b\in {\mathbf{R}}^{m}$ is a given vector, and $C\in {\mathbf{R}}^{n\times n}$ is a given matrix. Without loss of generality, we assume that the matrices ${A}_{i}$, $i=1,2,\dots ,m$, are linearly independent and that CQSDO satisfies the IPC. A detailed discussion and analysis of primal-dual IPMs for CQSDO can be found in [24, 28, 34].
It should be noted that the default step size (38), chosen in each inner iteration, is small enough for the analysis of the algorithm, whereas in practice the step size should be chosen as large as possible for efficiency. In the following test problem, we choose the maximum allowed step size such that the next iterate satisfies the positive semidefiniteness conditions, i.e., $X+\alpha {D}_{X}\succeq 0$ and $S+\alpha {D}_{S}\succeq 0$.
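One simple way to compute such a maximum step, sketched below under the simplifying assumption that the iterate and the direction are simultaneously diagonalized (so the semidefiniteness constraint reduces to the eigenvalue pairs), is to take the smallest ratio $-{\lambda}_{i}(X)/{\lambda}_{i}({D}_{X})$ over negative direction eigenvalues; a practical code would do this for both X and S and take the minimum. The function name and the 0.9 damping factor are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of the maximum-step-size safeguard in the commuting case:
# X + alpha*D_X >= 0 reduces to x_i + alpha*d_i >= 0 for eigenvalue pairs.

def max_step(x_eigs, d_eigs, damp=0.9):
    """Largest damped alpha with x_i + alpha * d_i >= 0 for all i."""
    alpha = float("inf")
    for xi, di in zip(x_eigs, d_eigs):
        if di < 0.0:                      # only negative components bind
            alpha = min(alpha, -xi / di)
    return damp * alpha if alpha != float("inf") else 1.0

# X = diag(1, 2, 4), D_X = diag(-0.5, 1, -2): binding ratios are 2.0 and 2.0,
# so the undamped maximum step is 2.0 and the damped step is 0.9 * 2.0 = 1.8.
print(max_step([1.0, 2.0, 4.0], [-0.5, 1.0, -2.0]))
```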
In the test problems, we use the threshold parameter $\tau =3$, the accuracy parameter $\epsilon ={10}^{-7}$, and the update parameter $\theta =\frac{1}{2\sqrt{n}}$ with $n=8$. In this case, the algorithm depicted in Figure 1 is indeed a small-update method. We choose $X=E$, $y=e$, and $S=E$ as the starting point, where E denotes the identity matrix of order 8 and e the all-one vector of dimension 4. Note that this point is strictly feasible. The initial value of the barrier parameter μ is $X\bullet S/n$ with $n=8$, i.e., 1. One easily checks that $\mathrm{\Psi}(X,S;\mu )=0<\tau =3$, so these data can indeed be used to initialize our algorithm.
The respective objective values are $\frac{1}{2}X\bullet \mathcal{Q}(X)+C\bullet X=32.959138158$ and $-\frac{1}{2}X\bullet \mathcal{Q}(X)+{b}^{T}y=32.959138116$, and the duality gap $X\bullet S$ is $4.2163\times {10}^{-8}$, which is less than ${10}^{-7}$.
Output of the IPM for the sample problem of CQSDO based on ${\psi}_{1}(t)$ with $\theta =\frac{1}{2\sqrt{n}}$
Iteration  $\frac{1}{2}X\bullet \mathcal{Q}(X)+C\bullet X$  $-\frac{1}{2}X\bullet \mathcal{Q}(X)+{b}^{T}y$  Duality gap, i.e., $X\bullet S$

0  36.000000000  28.000000000  8.000000000<10^{1} 
3  33.179225445  32.617281779  0.561943666<10^{0} 
5  32.996521968  32.904241476  0.092280493<10^{−1} 
8  32.961211886  32.956512652  0.004699234<10^{−2} 
10  32.959417225  32.958771601  0.000645624<10^{−3} 
12  32.959173712  32.959084185  0.000089527<10^{−4} 
15  32.959139849  32.959135221  0.000004628<10^{−5} 
17  32.959138306  32.959137869  0.000000436<10^{−6} 
19  32.959138158  32.959138116  0.000000042<10^{−7} 
It is clear from Table 2 that the small-update method presented in this paper is not efficient from a practical point of view, just as feasible IPMs with the best theoretical performance are far from practical. In fact, our algorithm suffers from the usual drawback of primal-dual IPMs: the number of iterations needed for convergence tends to be close to the upper bound, namely $O(\sqrt{n}\log\frac{n}{\epsilon})$. This is due to the small, fixed μ-updates (i.e., ${\mu}_{+}=(1-\theta )\mu $ with $\theta =\frac{1}{2\sqrt{n}}$ for CQSDO). It is desirable to make the largest possible update θ at each iteration, albeit at the cost of extra computation.
Output of the IPM for the sample problem of CQSDO based on ${\psi}_{1}(t)$ with $\theta =0.9$
Iteration  $\frac{1}{2}X\bullet \mathcal{Q}(X)+C\bullet X$  $-\frac{1}{2}X\bullet \mathcal{Q}(X)+{b}^{T}y$  Duality gap, i.e., $X\bullet S$

0  36.000000000  28.000000000  8.000000000<10^{1} 
2  33.345388371  32.346089274  0.999299097<10^{0} 
5  32.969032358  32.949621318  0.019411040<10^{−1} 
6  32.961389769  32.957573699  0.003816071<10^{−2} 
8  32.959247309  32.959032261  0.000215048<10^{−3} 
10  32.959142690  32.959127961  0.000014729<10^{−4} 
11  32.959138591  32.959136229  0.000002362<10^{−5} 
12  32.959138446  32.959137497  0.000000949<10^{−6} 
14  32.959138148  32.959138133  0.000000014<10^{−7} 
It is clear from Table 3 that the number of iterations of the algorithm depends on the update parameter θ: a larger value of θ gives better results. It should be pointed out, however, that too large a value of θ may prevent the algorithm from solving the problem. In the solution procedure, we might instead use dynamic updates of the barrier parameter, as described in [1]; this may significantly enhance the practical performance of the proposed algorithm.
7 Conclusions and remarks
In this paper, we presented a unified approach to, and comprehensive treatment of, primal-dual IPMs for CQSCO based on the entire class of eligible kernel functions. For large-update methods the best iteration bound is $O(\sqrt{r}\log r\log\frac{r}{\epsilon})$, and for small-update methods all iteration bounds have the same order of magnitude, namely $O(\sqrt{r}\log\frac{r}{\epsilon})$, which almost closes the gap between the iteration bounds for large- and small-update methods. Some preliminary numerical results are provided to demonstrate the computational performance of the algorithm depicted in Figure 1.
The paper generalizes results obtained in [6], where Bai et al. consider kernel-based primal-dual IPMs for LO, and in [11, 16, 30] and [23], where Bai et al., El Ghami et al., Wang et al., and Vieira consider the same type of IPMs for SOCO, SDO, CQSDO, and SCO, respectively. It turns out that the iteration bounds are the same as for the nonnegative orthant, except that n is replaced by r, the rank of the EJA. However, the analysis of the proposed algorithm is considerably more complicated than in [6, 11, 16, 23]. This is mainly because the orthogonality of the search directions, which holds in the LO, SOCO, SDO, and SCO cases, is lost for CQSCO.
Some interesting topics for further research remain. First, the search direction used in this paper is based on the NT-symmetrization scheme; it is natural to ask whether other symmetrization schemes can be used. Second, although we present a simple example of the computational performance of the proposed algorithm, more numerical experiments are needed to compare the behavior of our algorithm with other existing IPMs. Finally, the extension to general nonlinear optimization over symmetric cones deserves to be investigated.
Endnote
^{a} It may be worth mentioning that if we use the kernel function of the classical logarithmic barrier function, i.e., $\psi (t)=\frac{1}{2}({t}^{2}-1)-\log t$, then ${\psi}^{\prime}(t)=t-{t}^{-1}$, whence $-{\psi}^{\prime}(v)={v}^{-1}-v$, and hence system (7) coincides with the classical system (6).
Declarations
Acknowledgements
This work was supported by Shanghai Natural Science Fund Project (14ZR1418900), National Natural Science Foundation of China (No. 11001169), China Postdoctoral Science Foundation funded project (Nos. 2012T50427, 20100480604) and Natural Science Foundation of Shanghai University of Engineering Science (No. 2014YYYF01).
Authors’ Affiliations
References
 Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization. An InteriorPoint Approach. Wiley, Chichester; 1997.MATHGoogle Scholar
 Wright SJ: PrimalDual InteriorPoint Methods. SIAM, Philadelphia; 1997.View ArticleMATHGoogle Scholar
 Ye Y: Interior Point Algorithms: Theory and Analysis. Wiley, Chichester; 1997.View ArticleMATHGoogle Scholar
 Anjos MF, Lasserre JB International Series in Operational Research and Management Science 166. In Handbook on Semidefinite, Conic and Polynomial Optimization: Theory, Algorithms, Software and Applications. Springer, New York; 2012.View ArticleGoogle Scholar
 Peng J, Roos C, Terlaky T: Selfregular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002,93(1):129–171. 10.1007/s101070200296MathSciNetView ArticleMATHGoogle Scholar
 Bai YQ, El Ghami M, Roos C: A comparative study of kernel functions for primaldual interiorpoint algorithms in linear optimization. SIAM J. Optim. 2004,15(1):101–128. 10.1137/S1052623403423114MathSciNetView ArticleMATHGoogle Scholar
 Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both largeupdate and smallupdate interiorpoint methods. ANZIAM J. 2007,49(2):259–270. 10.1017/S1446181100012827MathSciNetView ArticleMATHGoogle Scholar
 Bai YQ, Lesaja G, Roos C, Wang GQ, El Ghami M: A class of largeupdate and smallupdate primaldual interiorpoint algorithms for linear optimization. J. Optim. Theory Appl. 2008,138(3):341–359. 10.1007/s109570089389zMathSciNetView ArticleMATHGoogle Scholar
 Bai YQ, Roos C: A polynomialtime algorithm for linear optimization based a new simple kernel function. Optim. Methods Softw. 2003,18(6):631–646. 10.1080/10556780310001639735MathSciNetView ArticleMATHGoogle Scholar
 Bai YQ, Roos C, El Ghami M: A primaldual interiorpoint method for linear optimization based a new proximity function. Optim. Methods Softw. 2002,17(6):985–1008. 10.1080/1055678021000090024MathSciNetView ArticleMATHGoogle Scholar
 Bai YQ, Wang GQ, Roos C: Primaldual interiorpoint algorithms for secondorder cone optimization based on kernel functions. Nonlinear Anal. 2009,70(10):3584–3602. 10.1016/j.na.2008.07.016MathSciNetView ArticleMATHGoogle Scholar
 Cai XZ, Wang GQ, Zhang ZH: Complexity analysis and numerical implementation of primaldual interiorpoint methods for convex quadratic optimization based on a finite barrier. Numer. Algorithms 2013,62(2):289–306. 10.1007/s110750129581yMathSciNetView ArticleMATHGoogle Scholar
 Chi XN, Liu SY: An infeasibleinteriorpoint predictorcorrector algorithm for the secondorder cone program. Acta Math. Sci. 2008,28(3):551–559. 10.1016/S02529602(08)600582MathSciNetView ArticleMATHGoogle Scholar
 Cho GM: Primaldual interiorpoint method based on a new barrier function. J. Nonlinear Convex Anal. 2011,12(3):611–624.MathSciNetMATHGoogle Scholar
 Cho GM: An interiorpoint algorithm for linear optimization based on a new barrier function. Appl. Math. Comput. 2011,218(2):386–395. 10.1016/j.amc.2011.05.075MathSciNetView ArticleMATHGoogle Scholar
 El Ghami M, Bai YQ, Roos C: Kernelfunction based algorithms for semidefinite optimization. RAIRO Oper. Res. 2009,43(2):189–199. 10.1051/ro/2009011MathSciNetView ArticleMATHGoogle Scholar
 El Ghami M, Guennounb ZA, Bouali S, Steihaug T: Interiorpoint methods for linear optimization based on a kernel function with a trigonometric barrier term. J. Comput. Appl. Math. 2012,236(15):3613–3623. 10.1016/j.cam.2011.05.036MathSciNetView ArticleMATHGoogle Scholar
 El Ghami M, Ivanov ID, Roos C, Steihag T: A polynomialtime algorithm for LO based on generalized logarithmic barrier functions. Int. J. Appl. Math. 2008,21(1):99–115.MathSciNetMATHGoogle Scholar
 El Ghami M, Roos C: Generic primaldual interior point methods based on a new kernel function. RAIRO Oper. Res. 2008,42(2):199–213. 10.1051/ro:2008009MathSciNetView ArticleMATHGoogle Scholar
 Gu G, Zangiabadi M, Roos C: Full NesterovTodd step infeasible interiorpoint method for symmetric optimization. Eur. J. Oper. Res. 2011,214(3):473–484. 10.1016/j.ejor.2011.02.022MathSciNetView ArticleMATHGoogle Scholar
 Peyghami MR, Hafshejani SF, Shirvani L: Complexity of interior-point methods for linear optimization based on a new trigonometric kernel function. J. Comput. Appl. Math. 2014, 255(1):74–85.
 Tang JY, He GP, Fang L: A new kernel function and its related properties for second-order cone optimization. Pac. J. Optim. 2012, 8(2):321–346.
 Vieira MVC: Jordan algebraic approach to symmetric optimization. PhD thesis, Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands; 2007.
 Wang GQ, Bai YQ, Roos C: Primal-dual interior-point algorithms for semidefinite optimization based on a simple kernel function. J. Math. Model. Algorithms 2005, 4(4):409–433. 10.1007/s10852-005-3561-3
 Wang GQ, Bai YQ: A new primal-dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 2009, 353(1):339–349. 10.1016/j.jmaa.2008.12.016
 Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian P-matrix linear complementarity problem over symmetric cones. J. Optim. Theory Appl. 2012, 152(3):739–772. 10.1007/s10957-011-9938-8
 Wang GQ, Bai YQ: A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 2012, 154(3):966–985. 10.1007/s10957-012-0013-x
 Toh KC: An inexact primal-dual path following algorithm for convex quadratic SDP. Math. Program. 2008, 112(1):221–254.
 Li L, Toh KC: A polynomial-time inexact interior-point method for convex quadratic symmetric cone programming. J. Math-for-Ind. 2010, 2B:199–212.
 Wang GQ, Zhu DT: A unified kernel function approach to primal-dual interior-point algorithms for convex quadratic SDO. Numer. Algorithms 2011, 57(4):537–558. 10.1007/s11075-010-9444-3
 Wang GQ, Zhang ZH, Zhu DT: On extending primal-dual interior-point method for linear optimization to convex quadratic symmetric cone optimization. Numer. Funct. Anal. Optim. 2012, 34(5):576–603.
 Wang GQ, Yu CJ, Teo KL: A full Nesterov-Todd step feasible interior-point method for convex quadratic optimization over symmetric cone. Appl. Math. Comput. 2013, 221(15):329–343.
 Bai YQ, Zhang LP: A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. J. Ind. Manag. Optim. 2011, 7(4):891–906.
 Achache M: A full Nesterov-Todd step feasible primal-dual interior point algorithm for convex quadratic semidefinite optimization. Appl. Math. Comput. 2014, 231(1):581–590.
 Faybusovich L: Euclidean Jordan algebras and interior-point algorithms. Positivity 1997, 1(4):331–357. 10.1023/A:1009701824047
 Schmieta SH, Alizadeh F: Extension of primal-dual interior-point algorithms to symmetric cones. Math. Program. 2003, 96(3):409–438. 10.1007/s10107-003-0380-z
 Nesterov YE, Todd MJ: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 1997, 22(1):1–42. 10.1287/moor.22.1.1
 Nesterov YE, Todd MJ: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 1998, 8(2):324–364. 10.1137/S1052623495290209
 Faybusovich L: A Jordan-algebraic approach to potential-reduction algorithms. Math. Z. 2002, 239(1):117–129. 10.1007/s002090100286
 Faraut J, Korányi A: Analysis on Symmetric Cones. Oxford University Press, New York; 1994.
 Baes M: Convexity and differentiability properties of spectral functions and spectral mappings on Euclidean Jordan algebras. Linear Algebra Appl. 2007, 422(2–3):664–700. 10.1016/j.laa.2006.11.025
 Korányi A: Monotone functions on formally real Jordan algebras. Math. Ann. 1984, 269(1):73–76. 10.1007/BF01455996
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.