
# Large-update interior point algorithm for ${P}_{\ast }$-linear complementarity problem

## Abstract

It is well known that each barrier function defines an interior point algorithm and each barrier function is determined by its univariate kernel function. In this paper we present a new large-update primal-dual interior point algorithm for solving the ${P}_{\ast }$-linear complementarity problem (LCP), based on a parametric version of the kernel function in (Bai et al. in SIAM J. Optim. 13:766-782, 2003). We show that the algorithm has $\mathcal{O}\left(\left(1+2\kappa \right){\left(logp\right)}^{3}\sqrt{n}\left(logn\right)log\frac{n{\mu }^{0}}{ϵ}\right)$ iteration complexity, where p is a barrier function parameter and κ is the handicap of the matrix. This is the best known complexity result for such a method.

MSC:90C33, 90C51.

## 1 Introduction

In this paper, we consider the standard form of LCP as follows:

$s=Mx+q,\phantom{\rule{2em}{0ex}}xs=0,\phantom{\rule{1em}{0ex}}x\ge 0,s\ge 0,$
(1.1)

where $x,s,q\in {\mathbf{R}}^{n}$ and $M\in {\mathbf{R}}^{n×n}$ is a ${P}_{\ast }$-matrix and xs denotes the componentwise (Hadamard) product of the vectors x and s.

The matrix M is a ${P}_{\ast }$-matrix if it is a ${P}_{\ast }\left(\kappa \right)$-matrix for some $\kappa \ge 0$, where ${P}_{\ast }\left(\kappa \right):=\left\{M\in {\mathbf{R}}^{n×n}\mid \left(1+4\kappa \right){\sum }_{i\in {I}_{+}\left(\xi \right)}{\left[\xi \right]}_{i}{\left[M\xi \right]}_{i}+{\sum }_{i\in {I}_{-}\left(\xi \right)}{\left[\xi \right]}_{i}{\left[M\xi \right]}_{i}\ge 0,\mathrm{\forall }\xi \in {\mathbf{R}}^{n}\right\}$, ${\left[M\xi \right]}_{i}$ denotes the i th component of the vector $M\xi$, ${I}_{+}\left(\xi \right)=\left\{1\le i\le n:{\left[\xi \right]}_{i}{\left[M\xi \right]}_{i}\ge 0\right\}$, and ${I}_{-}\left(\xi \right)=\left\{1\le i\le n:{\left[\xi \right]}_{i}{\left[M\xi \right]}_{i}<0\right\}$. Note that M is a ${P}_{\ast }\left(0\right)$-matrix if and only if M is positive semidefinite.

In the following, we give some examples for ${P}_{\ast }\left(\kappa \right)$-matrices.

Example 1.1 The matrix

${\mathbf{M}}_{\mathbf{1}}=\left(\begin{array}{cc}0& 2+4\kappa \\ -2& 0\end{array}\right)$

is ${P}_{\ast }\left(\kappa \right)$, for all $\kappa \ge 0$. Indeed, since $x\left({\mathbf{M}}_{\mathbf{1}}x\right)={\left(\left(2+4\kappa \right){x}_{1}{x}_{2},-2{x}_{1}{x}_{2}\right)}^{T}$, for ${x}_{1}{x}_{2}>0$, ${I}_{+}=\left\{1\right\}$ and ${I}_{-}=\left\{2\right\}$. Hence $\left(1+4\kappa \right)\left(2+4\kappa \right){x}_{1}{x}_{2}-2{x}_{1}{x}_{2}=4\kappa {x}_{1}{x}_{2}\left(3+4\kappa \right)\ge 0$ for all $\kappa \ge 0$. For ${x}_{1}{x}_{2}<0$, ${I}_{+}=\left\{2\right\}$ and ${I}_{-}=\left\{1\right\}$. Then $\left(1+4\kappa \right)\left(-2{x}_{1}{x}_{2}\right)+\left(2+4\kappa \right){x}_{1}{x}_{2}=-4\kappa {x}_{1}{x}_{2}\ge 0$, for all $\kappa \ge 0$. Thus, ${\mathbf{M}}_{\mathbf{1}}$ is ${P}_{\ast }\left(\kappa \right)$, for all $\kappa \ge 0$.

Example 1.2 For $c\ge 0$, the matrix

${\mathbf{M}}_{\mathbf{2}}=\left(\begin{array}{ccc}0& 1+4\kappa & 0\\ -1& 0& 0\\ 0& 0& c\end{array}\right)$

is ${P}_{\ast }\left(\kappa \right)$, for all $\kappa \ge 0$. $x\left({\mathbf{M}}_{\mathbf{2}}x\right)={\left(\left(1+4\kappa \right){x}_{1}{x}_{2},-{x}_{1}{x}_{2},c{x}_{3}^{2}\right)}^{T}$. If ${x}_{1}{x}_{2}>0$, ${I}_{+}=\left\{1,3\right\}$ and ${I}_{-}=\left\{2\right\}$. Hence ${\left(1+4\kappa \right)}^{2}{x}_{1}{x}_{2}+c\left(1+4\kappa \right){x}_{3}^{2}-{x}_{1}{x}_{2}=8\kappa \left(1+2\kappa \right){x}_{1}{x}_{2}+c\left(1+4\kappa \right){x}_{3}^{2}\ge 0$, for all $\kappa \ge 0$. If ${x}_{1}{x}_{2}<0$, ${I}_{+}=\left\{2,3\right\}$ and ${I}_{-}=\left\{1\right\}$. $\left(1+4\kappa \right)\left(-{x}_{1}{x}_{2}+c{x}_{3}^{2}\right)+\left(1+4\kappa \right){x}_{1}{x}_{2}=c\left(1+4\kappa \right){x}_{3}^{2}\ge 0$, for all $\kappa \ge 0$. Thus, ${\mathbf{M}}_{\mathbf{2}}$ is ${P}_{\ast }\left(\kappa \right)$, for all $\kappa \ge 0$.
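
Both examples can also be spot-checked numerically. The sketch below (the helper names are ours, and sampling finitely many vectors ξ is evidence, not a proof) evaluates the ${P}_{\ast }\left(\kappa \right)$ condition directly:

```python
import numpy as np

def p_star_lhs(M, xi, kappa):
    """Left-hand side of the P_*(kappa) condition for a single vector xi."""
    prod = xi * (M @ xi)                       # componentwise xi_i * (M xi)_i
    return (1 + 4 * kappa) * prod[prod >= 0].sum() + prod[prod < 0].sum()

def no_counterexample(M, kappa, trials=5000, seed=0):
    """Random search for a violating xi; returning False certifies M is NOT P_*(kappa)."""
    rng = np.random.default_rng(seed)
    return all(p_star_lhs(M, rng.standard_normal(M.shape[0]), kappa) > -1e-12
               for _ in range(trials))

kappa = 0.5                                    # arbitrary test value
M1 = np.array([[0.0, 2 + 4 * kappa], [-2.0, 0.0]])          # Example 1.1
M2 = np.array([[0.0, 1 + 4 * kappa, 0.0],
               [-1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])                            # Example 1.2 with c = 1
print(no_counterexample(M1, kappa), no_counterexample(M2, kappa))  # True True
```

A failed search cannot certify the ${P}_{\ast }\left(\kappa \right)$ property, but a single violating ξ disproves it, which makes this a useful quick filter.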

Linear complementarity problems (LCPs) have many applications in science, economics and engineering. LCPs include linear and quadratic programming, fixed point problems and sets of piecewise-linear equations, bimatrix equilibrium points and variational inequalities [1]. A large-update interior point method (IPM) is one of the most efficient numerical methods for various optimization problems.

Peng-Roos-Terlaky [2-4] proposed new variants of interior point methods (IPMs) based on self-regular barrier functions and achieved the best complexity known so far for large-update methods with a specific self-regular barrier function. Bai-Ghami-Roos [5] proposed a new primal-dual IPM for the linear optimization (LO) problem based on eligible barrier functions, together with a unified scheme for analyzing the algorithm based on four conditions on the kernel function, and Bai-Lesaja-Roos [6] generalized it to ${P}_{\ast }\left(\kappa \right)$-LCP. Cho [7] and Cho-Kim [8] extended the complexity analysis for the LO problem to ${P}_{\ast }\left(\kappa \right)$-LCP. Amini-Haseli [9] and Amini-Peyghami [10] introduced generalized versions of the kernel functions in [5] and improved the complexity results for large-update methods for LO and ${P}_{\ast }\left(\kappa \right)$-LCP, respectively. Recently, Lesaja-Roos [11] proposed a unified analysis of IPMs for ${P}_{\ast }\left(\kappa \right)$-LCP based on the class of eligible barrier functions first introduced by Bai-Ghami-Roos [5] for LO. Wang-Bai [12] generalized an interior point algorithm for LO to P-matrix LCPs over symmetric cones based on the same kernel function. Wang-Lesaja [13] extended the full NT-step infeasible IPM for symmetric cone LO to the Cartesian ${P}_{\ast }\left(\kappa \right)$-symmetric cone LCP; their algorithm is a small-update method.

The most challenging question in this research area is whether or not there exists a kernel function for which the iteration bound for large-update methods is the same as, or even better than, the currently best known bound for such methods [5]. Bai-Ghami-Roos [14] proposed a new efficient large-update IPM for LO based on a barrier-type function which is not a barrier function in the usual sense, since it has a finite value at the boundary of the feasible region. Despite this, they obtained the best known iteration bound. Ghami [15] proposed various versions of interior point algorithms based on kernel functions and showed through numerical tests that the kernel function in [14] seems promising. Wang-Bai [16] proposed a generalized version of the kernel function in [14], with a parameter in the growth term, for ${P}_{\ast }\left(\kappa \right)$-horizontal LCPs and obtained the best known complexity bound only when the parameter value equals 1, i.e. for the same kernel function as in [14]. This implies that a parameter in the growth term does not improve the complexity of the algorithm except for the value 1.

Motivated by this, we introduce a parameter in the barrier term of the kernel function in [14] and obtain the best known complexity result for large-update methods for all values of the parameter. Note that as the parameter in the barrier term grows, the barrier function grows faster as t approaches zero.

The paper is organized as follows: In Section 2, we recall some basic concepts and introduce the generic IPM. In Section 3, we introduce a class of barrier functions and propose a new large-update interior point algorithm for ${P}_{\ast }$-LCP. In Section 4, we derive the complexity results for the algorithm. Finally, concluding remarks are given in Section 5.

Throughout the paper, ${\mathbf{R}}_{+}^{n}$ and ${\mathbf{R}}_{++}^{n}$ denote the set of n-dimensional nonnegative vectors and positive vectors, respectively. For $x\in {\mathbf{R}}^{n}$, ${\left[x\right]}_{i}$ and ${x}_{1}$ denote the i th component and the smallest component of the vector x, respectively. We denote by D the diagonal matrix whose diagonal entries are the components of a vector d, and by e the n-dimensional vector of ones. The index set $I:=\left\{1,2,\dots ,n\right\}$. For ${g}_{1}\left(t\right),{g}_{2}\left(t\right):{\mathbf{R}}_{++}\to {\mathbf{R}}_{++}$, ${g}_{1}\left(t\right)=\mathcal{O}\left({g}_{2}\left(t\right)\right)$ if there exists a positive constant ${c}_{1}$ such that ${g}_{1}\left(t\right)\le {c}_{1}{g}_{2}\left(t\right)$, for all $t>0$, and ${g}_{1}\left(t\right)=\mathrm{\Theta }\left({g}_{2}\left(t\right)\right)$ if there exist positive constants ${c}_{2}$ and ${c}_{3}$ such that ${c}_{2}{g}_{2}\left(t\right)\le {g}_{1}\left(t\right)\le {c}_{3}{g}_{2}\left(t\right)$, for all $t>0$. For $a\in \mathbf{R}$, $⌊a⌋:=max\left\{m\in \mathbf{Z}\mid m\le a\right\}$ and $⌈a⌉:=min\left\{n\in \mathbf{Z}\mid n\ge a\right\}$, where Z is the set of integers. log denotes the natural logarithm.

## 2 Preliminaries

In this section we recall some basic concepts and introduce the generic interior point algorithm. The basic idea of IPMs for LCP is to replace the second equation in (1.1) by the parameterized equation $xs=\mu \mathbf{e}$, $\mu >0$. Now we consider the following system:

$s=Mx+q,\phantom{\rule{2em}{0ex}}xs=\mu \mathbf{e},\phantom{\rule{1em}{0ex}}x>0,s>0.$
(2.1)

Without loss of generality, we assume that (1.1) satisfies the interior point condition (IPC), i.e., there exists a pair $\left({x}^{0},{s}^{0}\right)>0$ such that ${s}^{0}=M{x}^{0}+q$ [17]. Since M is a ${P}_{\ast }\left(\kappa \right)$-matrix for some $\kappa \ge 0$ and (1.1) satisfies the IPC, the system (2.1) has a unique solution for each $\mu >0$. We denote the solution of (2.1) by $\left(x\left(\mu \right),s\left(\mu \right)\right)$, which is called the μ-center for $\mu >0$. The set of μ-centers is called the central path of (1.1). Since the limit of the μ-centers as $\mu \to 0$ satisfies (1.1), it yields a solution of (1.1) [17]. IPMs follow the central path approximately and approach a solution of (1.1) as $\mu \to 0$.

For given $\left(x,s\right):=\left({x}^{0},{s}^{0}\right)$, by applying Newton’s method to the system (2.1), we have the following Newton system:

$-M\mathrm{\Delta }x+\mathrm{\Delta }s=0,\phantom{\rule{2em}{0ex}}S\mathrm{\Delta }x+X\mathrm{\Delta }s=\mu \mathbf{e}-xs,$
(2.2)

where $X:=diag\left(x\right)$ and $S:=diag\left(s\right)$. By Lemma 4.1 of [17], the system (2.2) has a unique solution $\left(\mathrm{\Delta }x,\mathrm{\Delta }s\right)$. By taking a step along the search direction $\left(\mathrm{\Delta }x,\mathrm{\Delta }s\right)$, one constructs a new iterate $\left({x}_{+},{s}_{+}\right)$, where

${x}_{+}:=x+\alpha \mathrm{\Delta }x,\phantom{\rule{2em}{0ex}}{s}_{+}:=s+\alpha \mathrm{\Delta }s,$

for some step size $\alpha \ge 0$. For notational convenience, we define the following:

$v:=\sqrt{\frac{xs}{\mu }},\phantom{\rule{2em}{0ex}}d:=\sqrt{\frac{x}{s}},\phantom{\rule{2em}{0ex}}{d}_{x}:=\frac{v\mathrm{\Delta }x}{x},\phantom{\rule{2em}{0ex}}{d}_{s}:=\frac{v\mathrm{\Delta }s}{s}.$
(2.3)

Using (2.3), we can rewrite the system (2.2) as follows:

$-\overline{M}{d}_{x}+{d}_{s}=0,\phantom{\rule{2em}{0ex}}{d}_{x}+{d}_{s}={v}^{-1}-v,$
(2.4)

where $\overline{M}:=DMD$ and $D:=diag\left(d\right)$. Note that the right-hand side of the second equation in (2.4) equals the negative gradient of the classical logarithmic barrier function ${\mathrm{\Psi }}_{l}\left(v\right)$, i.e.,

${d}_{x}+{d}_{s}=-\mathrm{\nabla }{\mathrm{\Psi }}_{l}\left(v\right),$
(2.5)

where ${\mathrm{\Psi }}_{l}\left(v\right):={\sum }_{i=1}^{n}{\psi }_{l}\left({v}_{i}\right)$ and ${\psi }_{l}\left(t\right):=\frac{{t}^{2}-1}{2}-logt$, for $t>0$. We call ${\psi }_{l}$ the kernel function of the classical logarithmic barrier function ${\mathrm{\Psi }}_{l}\left(v\right)$.

Assuming that we are given a strictly feasible point $\left(x,s\right)$ which is in a τ-neighborhood of the given μ-center, the generic interior point algorithm works as in Algorithm 1.
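
Algorithm 1 is referenced but not reproduced in this excerpt. The following is a minimal runnable sketch of such a generic primal-dual IPM under our own simplifications: it uses the classical Newton direction (2.2), an ad hoc proximity test, and a fraction-to-the-boundary step rule instead of the kernel-based default step size derived in Section 4; the test matrix is positive semidefinite, hence ${P}_{\ast }\left(0\right)$.

```python
import numpy as np

def newton_direction(M, x, s, mu):
    """Solve the Newton system (2.2): -M dx + ds = 0, S dx + X ds = mu*e - x*s."""
    n = len(x)
    dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, mu * np.ones(n) - x * s)
    return dx, M @ dx                          # ds = M dx keeps s = Mx + q exactly

def generic_ipm(M, q, x, s, mu, theta=0.5, eps=1e-8, max_iter=500):
    """Outer loop: shrink mu by the factor (1 - theta); inner loop: Newton re-centering."""
    for _ in range(max_iter):
        if x @ s < eps:                        # epsilon-approximate solution reached
            break
        dx, ds = newton_direction(M, x, s, mu)
        alpha = 1.0                            # fraction-to-the-boundary step rule
        for w, dw in ((x, dx), (s, ds)):
            neg = dw < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-w[neg] / dw[neg]))
        x, s = x + alpha * dx, s + alpha * ds
        if np.linalg.norm(x * s / mu - 1.0) < 1.0:   # close enough to the mu-center
            mu *= 1 - theta
    return x, s

M = np.array([[2.0, 1.0], [1.0, 2.0]])         # positive semidefinite, hence P_*(0)
q = np.array([-1.0, -1.0])
x0 = np.ones(2)                                # strictly feasible: M x0 + q > 0
x, s = generic_ipm(M, q, x0, M @ x0 + q, mu=1.0)
print(x, s, x @ s)                             # x near (1/3, 1/3), gap near 0
```

Because $\mathrm{\Delta }s=M\mathrm{\Delta }x$, the affine feasibility $s=Mx+q$ is preserved exactly at every step; only positivity and centrality need to be controlled.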

## 3 New algorithm

Consider a class of kernel functions $\psi \left(t\right)$ as follows:

$\psi \left(t\right):=\frac{\left(logp\right)\left({t}^{2}-1\right)}{2}+\frac{1}{\sigma }\left({p}^{\sigma \left(1-t\right)}-1\right),\phantom{\rule{1em}{0ex}}p\ge e,\sigma \ge 1,t>0.$
(3.1)

Then we have the first three derivatives of $\psi \left(t\right)$ as follows:

$\begin{array}{r}{\psi }^{\prime }\left(t\right)=\left(logp\right)\left(t-{p}^{\sigma \left(1-t\right)}\right),\\ {\psi }^{″}\left(t\right)=\left(logp\right)\left(1+\sigma \left(logp\right){p}^{\sigma \left(1-t\right)}\right),\\ {\psi }^{\left(3\right)}\left(t\right)=-{\sigma }^{2}{\left(logp\right)}^{3}{p}^{\sigma \left(1-t\right)}.\end{array}$
(3.2)

From (3.1) and (3.2), we have for $p\ge e$, $\sigma \ge 1$ and $t>0$,

${\psi }^{\prime }\left(1\right)=\psi \left(1\right)=0,\phantom{\rule{2em}{0ex}}{\psi }^{″}\left(t\right)>logp,\phantom{\rule{2em}{0ex}}\underset{t\to {0}^{+}}{lim}\psi \left(t\right)<\mathrm{\infty },\phantom{\rule{2em}{0ex}}\underset{t\to \mathrm{\infty }}{lim}\psi \left(t\right)=\mathrm{\infty }.$
(3.3)

From (3.3), $\psi \left(t\right)$ is strictly convex and has a minimum value 0 at $t=1$.
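
For concreteness, the kernel function (3.1) and its derivatives (3.2) can be coded and cross-checked by finite differences; the parameter values p = 3 and σ = 2 below are arbitrary choices satisfying $p\ge e$, $\sigma \ge 1$:

```python
import numpy as np

def psi(t, p=3.0, sigma=2.0):
    """Kernel function (3.1); p >= e, sigma >= 1, t > 0."""
    return np.log(p) * (t**2 - 1) / 2 + (p**(sigma * (1 - t)) - 1) / sigma

def dpsi(t, p=3.0, sigma=2.0):
    """psi'(t) from (3.2)."""
    return np.log(p) * (t - p**(sigma * (1 - t)))

def d2psi(t, p=3.0, sigma=2.0):
    """psi''(t) from (3.2)."""
    return np.log(p) * (1 + sigma * np.log(p) * p**(sigma * (1 - t)))

# finite-difference cross-check of the first derivative in (3.2)
h = 1e-6
for t in (0.3, 1.0, 2.5):
    fd = (psi(t + h) - psi(t - h)) / (2 * h)
    assert abs(fd - dpsi(t)) < 1e-5

# the properties (3.3): psi(1) = psi'(1) = 0 and psi''(t) > log p
print(psi(1.0), dpsi(1.0), d2psi(1.0) > np.log(3.0))  # 0.0 0.0 True
```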

Lemma 3.1 Let $\psi \left(t\right)$ be defined as in (3.1). Then for $p\ge e$ and $\sigma \ge 1$,

1. (i)

$t{\psi }^{″}\left(t\right)+{\psi }^{\prime }\left(t\right)\ge 0$, $t>\frac{1}{\sigma logp}$,

2. (ii)

$t{\psi }^{″}\left(t\right)-{\psi }^{\prime }\left(t\right)\ge 0$, $t>0$,

3. (iii)

${\psi }^{\left(3\right)}\left(t\right)<0$, $t>0$.

Proof For (i), using (3.2) with $t>\frac{1}{\sigma logp}$, we have

$t{\psi }^{″}\left(t\right)+{\psi }^{\prime }\left(t\right)=2t\left(logp\right)+\left(\sigma \left(logp\right)t-1\right)\left(logp\right){p}^{\sigma \left(1-t\right)}>0.$

For (ii), $t{\psi }^{″}\left(t\right)-{\psi }^{\prime }\left(t\right)=\left(\sigma \left(logp\right)t+1\right)\left(logp\right){p}^{\sigma \left(1-t\right)}>0$, $t>0$.

For (iii), it is clear from (3.2). □

Corollary 3.2 Let ${t}_{1},{t}_{2}\ge \frac{1}{\sigma logp}$. By Lemma  3.1(i) and Lemma  2.1.2 in [4], we have $\psi \left(\sqrt{{t}_{1}{t}_{2}}\right)\le \frac{1}{2}\left(\psi \left({t}_{1}\right)+\psi \left({t}_{2}\right)\right)$, i.e., $\psi \left(t\right)$ is exponentially convex, for all $t>\frac{1}{\sigma logp}$.
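
A quick numerical spot-check of Corollary 3.2 (illustrative only; p = 3 and σ = 2 are arbitrary admissible values):

```python
import numpy as np

def psi(t, p=3.0, sigma=2.0):
    """Kernel function (3.1)."""
    return np.log(p) * (t**2 - 1) / 2 + (p**(sigma * (1 - t)) - 1) / sigma

p, sigma = 3.0, 2.0
threshold = 1 / (sigma * np.log(p))        # lower limit from Lemma 3.1(i)
rng = np.random.default_rng(1)
for _ in range(1000):
    t1, t2 = threshold + rng.uniform(0.0, 5.0, size=2)
    # exponential convexity: psi(sqrt(t1*t2)) <= (psi(t1) + psi(t2)) / 2
    assert psi(np.sqrt(t1 * t2)) <= 0.5 * (psi(t1) + psi(t2)) + 1e-12
print("no violation found in 1000 random trials")
```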

Remark 3.3 By Lemma 3.1(ii), (iii), and Lemma 2.4 in [5], ${\psi }^{″}\left(t\right){\psi }^{\prime }\left(\beta t\right)-\beta {\psi }^{\prime }\left(t\right){\psi }^{″}\left(\beta t\right)>0$, $t>0$, $\beta >1$.

Lemma 3.4 For $\psi \left(t\right)$ as in (3.1), we have

$\frac{logp}{2}{\left(t-1\right)}^{2}\le \psi \left(t\right)\le \frac{1}{2logp}{\left({\psi }^{\prime }\left(t\right)\right)}^{2},\phantom{\rule{1em}{0ex}}p\ge e,\sigma \ge 1,t>0.$

Proof Using (3.3), we have

$\psi \left(t\right)={\int }_{1}^{t}{\int }_{1}^{\xi }{\psi }^{″}\left(\zeta \right)\phantom{\rule{0.2em}{0ex}}d\zeta \phantom{\rule{0.2em}{0ex}}d\xi \ge {\int }_{1}^{t}{\int }_{1}^{\xi }\left(logp\right)\phantom{\rule{0.2em}{0ex}}d\zeta \phantom{\rule{0.2em}{0ex}}d\xi =\frac{logp}{2}{\left(t-1\right)}^{2}$

and

$\begin{array}{rcl}\psi \left(t\right)& =& {\int }_{1}^{t}{\int }_{1}^{\xi }{\psi }^{″}\left(\zeta \right)\phantom{\rule{0.2em}{0ex}}d\zeta \phantom{\rule{0.2em}{0ex}}d\xi \le \frac{1}{logp}{\int }_{1}^{t}{\int }_{1}^{\xi }{\psi }^{″}\left(\xi \right){\psi }^{″}\left(\zeta \right)\phantom{\rule{0.2em}{0ex}}d\zeta \phantom{\rule{0.2em}{0ex}}d\xi \\ =& \frac{1}{logp}{\int }_{1}^{t}{\psi }^{″}\left(\xi \right){\psi }^{\prime }\left(\xi \right)\phantom{\rule{0.2em}{0ex}}d\xi =\frac{1}{logp}{\int }_{1}^{t}{\psi }^{\prime }\left(\xi \right)\phantom{\rule{0.2em}{0ex}}d{\psi }^{\prime }\left(\xi \right)=\frac{1}{2logp}{\left({\psi }^{\prime }\left(t\right)\right)}^{2}.\end{array}$

□

Lemma 3.5 Let $\varrho :\left[0,\mathrm{\infty }\right)\to \left[1,\mathrm{\infty }\right)$ be the inverse function of $\psi \left(t\right)$ for $t\ge 1$. Then we have

$\varrho \left(u\right)\le 1+\sqrt{\frac{2u}{logp}},\phantom{\rule{1em}{0ex}}p\ge e,u\ge 0.$

Proof Let $u:=\psi \left(t\right)$ for $t\ge 1$. Then $\varrho \left(u\right)=t$. Using the first inequality in Lemma 3.4, we have $u=\psi \left(t\right)\ge \frac{logp}{2}{\left(t-1\right)}^{2}$. Then we have $t=\varrho \left(u\right)\le 1+\sqrt{\frac{2u}{logp}}$. □
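
Since ψ is strictly increasing on $\left[1,\mathrm{\infty }\right)$, its inverse ϱ can be evaluated by bisection, which lets one verify the bound of Lemma 3.5 numerically (an illustration under arbitrary admissible p, σ; the helper `varrho` is ours):

```python
import numpy as np

def psi(t, p=3.0, sigma=2.0):
    """Kernel function (3.1)."""
    return np.log(p) * (t**2 - 1) / 2 + (p**(sigma * (1 - t)) - 1) / sigma

def varrho(u, p=3.0, sigma=2.0, hi=1e6):
    """Inverse of psi on [1, inf), by bisection (psi is increasing there)."""
    lo = 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi(mid, p, sigma) < u:
            lo = mid
        else:
            hi = mid
    return lo

for u in (0.5, 5.0, 50.0):
    assert varrho(u) <= 1 + np.sqrt(2 * u / np.log(3.0)) + 1e-9   # Lemma 3.5
print(varrho(5.0))
```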

Lemma 3.6 Let $\rho :\left[0,\mathrm{\infty }\right)\to \left(0,1\right]$ be the inverse function of $-\frac{1}{2}{\psi }^{\prime }\left(t\right)$ for $0<t\le 1$. Then $\rho \left(z\right)\ge \frac{\sigma \left(logp\right)-log\left(\frac{2z}{logp}+1\right)}{\sigma logp}$, $p>e$, $\sigma \ge 1$, $z\ge 0$.

Proof Let $z:=-\frac{1}{2}{\psi }^{\prime }\left(t\right)$, for $0<t\le 1$. By the definition of ρ, $\rho \left(z\right)=t$, for $z\ge 0$ and $2z=-{\psi }^{\prime }\left(t\right)$. By (3.2) and $0<t\le 1$, ${p}^{\sigma \left(1-t\right)}=\frac{2z}{logp}+t\le \frac{2z}{logp}+1$. Hence $t=\rho \left(z\right)\ge \frac{\sigma \left(logp\right)-log\left(\frac{2z}{logp}+1\right)}{\sigma logp}$. □

Define for $\psi \left(t\right)$ as in (3.1) and $v\in {\mathbf{R}}_{++}^{n}$,

$\mathrm{\Psi }\left(v\right):=\mathrm{\Psi }\left(x,s,\mu \right):=\sum _{i=1}^{n}\psi \left({\left[v\right]}_{i}\right).$
(3.4)

Since $\mathrm{\Psi }\left(v\right)$ is strictly convex and minimal at $v=\mathbf{e}$, we have

$\mathrm{\Psi }\left(v\right)=0\phantom{\rule{1em}{0ex}}⇔\phantom{\rule{1em}{0ex}}v=\mathbf{e}\phantom{\rule{1em}{0ex}}⇔\phantom{\rule{1em}{0ex}}x=x\left(\mu \right),\phantom{\rule{2em}{0ex}}s=s\left(\mu \right).$

We use $\mathrm{\Psi }\left(v\right)$ as the proximity function to measure the distance between the current iteration and corresponding μ-center. Also, we define the norm-based proximity measure $\delta \left(v\right)$ as follows:

$\delta \left(v\right):=\frac{1}{2}\parallel \mathrm{\nabla }\mathrm{\Psi }\left(v\right)\parallel =\frac{1}{2}\parallel {d}_{x}+{d}_{s}\parallel .$
(3.5)

Note that $\delta \left(v\right)=0⇔v=\mathbf{e}⇔\mathrm{\Psi }\left(v\right)=0$. In this paper, we replace the right-hand side of (2.5), $-\mathrm{\nabla }{\mathrm{\Psi }}_{l}\left(v\right)$, by $-\mathrm{\nabla }\mathrm{\Psi }\left(v\right)$ as in (3.4). This defines a new search direction and proximity function.

In the following we derive a lower bound for the proximity measure $\delta \left(v\right)$ in terms of the barrier function $\mathrm{\Psi }\left(v\right)$.

Lemma 3.7 Let $\delta \left(v\right)$ and $\mathrm{\Psi }\left(v\right)$ be defined as in (3.5) and (3.4), respectively. Then $\delta \left(v\right)\ge \sqrt{\frac{logp}{2}\mathrm{\Psi }\left(v\right)}$, $v\in {\mathbf{R}}_{++}^{n}$, $p\ge e$.

Proof Using (3.5) and the second inequality of Lemma 3.4,

${\delta }^{2}\left(v\right)=\frac{1}{4}\sum _{i=1}^{n}{\left({\psi }^{\prime }\left({\left[v\right]}_{i}\right)\right)}^{2}\ge \frac{logp}{2}\sum _{i=1}^{n}\psi \left({\left[v\right]}_{i}\right)=\frac{logp}{2}\mathrm{\Psi }\left(v\right).$

Hence we have $\delta \left(v\right)\ge \sqrt{\frac{logp}{2}\mathrm{\Psi }\left(v\right)}$. □
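
Lemma 3.7 can likewise be spot-checked on random positive vectors v (illustrative, with arbitrary admissible p, σ):

```python
import numpy as np

def psi(t, p=3.0, sigma=2.0):
    """Kernel function (3.1), applied componentwise."""
    return np.log(p) * (t**2 - 1) / 2 + (p**(sigma * (1 - t)) - 1) / sigma

def dpsi(t, p=3.0, sigma=2.0):
    """psi'(t) from (3.2), applied componentwise."""
    return np.log(p) * (t - p**(sigma * (1 - t)))

rng = np.random.default_rng(2)
for _ in range(1000):
    v = rng.uniform(0.2, 3.0, size=5)          # a positive vector v
    Psi = psi(v).sum()                         # barrier value (3.4)
    delta = 0.5 * np.linalg.norm(dpsi(v))      # proximity measure (3.5)
    assert delta >= np.sqrt(np.log(3.0) / 2 * Psi) - 1e-9   # Lemma 3.7
print("Lemma 3.7 holds on all sampled vectors")
```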

Lemma 3.8 Let $L\ge 8$ and $\mathrm{\Psi }\left(v\right)\le L$. If $\sigma \ge 1+2log\left(1+L\right)$ and $p\ge e$, then ${v}_{1}\ge \frac{3}{2\sigma logp}$.

Proof If ${v}_{1}\ge 1$, then ${v}_{1}\ge 1>\frac{3}{2\sigma logp}$. Suppose that ${v}_{1}<1$. Let $t:={v}_{1}$. Since $\mathrm{\Psi }\left(v\right)\le L$, $\psi \left(t\right)\le L$, i.e., $\frac{\left(logp\right)\left({t}^{2}-1\right)}{2}+\frac{1}{\sigma }\left({p}^{\sigma \left(1-t\right)}-1\right)\le L$. This implies that $\frac{1}{\sigma }\left({p}^{\sigma \left(1-t\right)}-1\right)\le L+\frac{\left(logp\right)\left(1-{t}^{2}\right)}{2}\le L+\frac{logp}{2}$, ${p}^{1-\sigma t}\le \frac{1+\sigma \left(L+\frac{logp}{2}\right)}{{p}^{\sigma -1}}$. Let ${g}_{1}\left(\sigma \right):=\frac{1+\sigma \left(L+\frac{logp}{2}\right)}{{p}^{\sigma -1}}$. Then ${g}_{1}\left(\sigma \right)$ is monotone decreasing in σ. Since $\sigma \ge 1+2log\left(1+L\right)$ and $p\ge e$, ${p}^{1-\sigma t}\le \frac{1+\left(1+2log\left(1+L\right)\right)\left(L+\frac{logp}{2}\right)}{{p}^{2log\left(1+L\right)}}\le \frac{1+\left(1+2log\left(1+L\right)\right)\left(L+\frac{logp}{2}\right)}{{e}^{2log\left(1+L\right)}}=\frac{1+\left(1+2log\left(1+L\right)\right)\left(L+\frac{logp}{2}\right)}{{\left(1+L\right)}^{2}}$.

Let ${g}_{2}\left(L\right):=\frac{1+\left(1+2log\left(1+L\right)\right)\left(L+\frac{logp}{2}\right)}{{\left(1+L\right)}^{2}}$. Then ${p}^{1-\sigma t}\le {g}_{2}\left(L\right)$ and hence $t\ge \frac{1}{\sigma logp}log\frac{p}{{g}_{2}\left(L\right)}$. Let ${g}_{3}\left(p,L\right):=\frac{p}{{g}_{2}\left(L\right)}:=\frac{p{\left(1+L\right)}^{2}}{1+\left(1+2log\left(1+L\right)\right)\left(L+\frac{logp}{2}\right)}$. ${g}_{3}\left(p,L\right)$ is monotone increasing in p and L, respectively. Since $p\ge e$ and $L\ge 8$, ${g}_{3}\left(p,L\right)\ge \frac{81e}{1+8.5\left(1+4log3\right)}\ge 4.6$. Hence $t\ge \frac{log\left(4.6\right)}{\sigma logp}\ge \frac{3}{2\sigma logp}$. □

## 4 Complexity analysis

In this section we derive the iteration complexity of the algorithm for large-update methods. For the complexity analysis we follow a framework similar to the one developed in [5] for LO problems. In the following we bound the growth of the barrier function when the parameter μ is updated at the start of an outer iteration.

Using Lemma 3.1(ii), (iii), and Theorem 3.2 in [5], we obtain the following lemma.

Lemma 4.1 Let ϱ be defined as in Lemma  3.5. If $\mathrm{\Psi }\left(v\right)\le \tau$, then

$\mathrm{\Psi }\left(\beta v\right)\le n\psi \left(\beta \varrho \left(\frac{\tau }{n}\right)\right),\phantom{\rule{1em}{0ex}}v\in {\mathbf{R}}_{++}^{n},\beta \ge 1.$
(4.1)

In the following we compute the upper bounds of $\mathrm{\Psi }\left(v\right)$ when we update the barrier parameter μ.

Theorem 4.2 Let $0<\theta <1$ and ${v}_{+}:=\frac{v}{\sqrt{1-\theta }}$. If $\mathrm{\Psi }\left(v\right)\le \tau$, then we have

$\mathrm{\Psi }\left({v}_{+}\right)\le \frac{\theta \left(logp\right)n+2\sqrt{2\tau \left(logp\right)n}+2\tau }{2\left(1-\theta \right)},\phantom{\rule{1em}{0ex}}p\ge e.$

Proof Define ${\psi }_{b}\left(t\right):=\frac{1}{\sigma }\left({p}^{\sigma \left(1-t\right)}-1\right)$. Then $\psi \left(t\right)=\frac{\left(logp\right)\left({t}^{2}-1\right)}{2}+{\psi }_{b}\left(t\right)$ and ${\psi }_{b}^{\prime }\left(t\right)=-\left(logp\right){p}^{\sigma \left(1-t\right)}<0$ and ${\psi }_{b}\left(1\right)=0$. Hence, we have

$\psi \left(t\right)\le \frac{\left(logp\right)\left({t}^{2}-1\right)}{2},\phantom{\rule{1em}{0ex}}t\ge 1,p\ge e.$
(4.2)

Using Lemma 4.1, (4.2), and Lemma 3.5, we have

$\begin{array}{rcl}\mathrm{\Psi }\left({v}_{+}\right)& \le & n\psi \left(\frac{1}{\sqrt{1-\theta }}\varrho \left(\frac{\tau }{n}\right)\right)\le \frac{nlogp}{2}\left(\frac{{\varrho }^{2}\left(\frac{\tau }{n}\right)}{1-\theta }-1\right)\\ \le & \frac{nlogp}{2}\left(\frac{{\left(1+\sqrt{\frac{2\tau }{nlogp}}\right)}^{2}}{1-\theta }-1\right)\\ =& \frac{\theta \left(logp\right)n+2\sqrt{2\tau \left(logp\right)n}+2\tau }{2\left(1-\theta \right)}.\end{array}$

□

Define

${\overline{\mathrm{\Psi }}}_{0}:=\frac{\theta \left(logp\right)n+2\sqrt{2\tau \left(logp\right)n}+2\tau }{2\left(1-\theta \right)}.$
(4.3)

We will use ${\overline{\mathrm{\Psi }}}_{0}$ for the upper bounds of $\mathrm{\Psi }\left(v\right)$ for large-update methods.

Remark 4.3 Let $L:={\overline{\mathrm{\Psi }}}_{0}$. Without loss of generality, we can assume that $L\ge 8$. Indeed, when $p\ge e$, $\tau \ge 1$ and $n\ge 2$, $L\ge \frac{\theta n+2\sqrt{2\tau n}+2\tau }{2\left(1-\theta \right)}\ge \frac{\theta n+2\sqrt{2n}+2}{2\left(1-\theta \right)}\ge \frac{\theta +3}{1-\theta }>8$ if $\theta >\frac{5}{9}$. In the algorithm we take $\sigma :=1+2log\left(1+L\right)$.

Remark 4.4 For large-update method with $\tau =\mathcal{O}\left(n\right)$ and $\theta =\mathrm{\Theta }\left(1\right)$, we have ${\overline{\mathrm{\Psi }}}_{0}=\mathcal{O}\left(\left(logp\right)n\right)$ and $\sigma =\mathcal{O}\left(log\left(\left(logp\right)n\right)\right)$.
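
As an illustration of Remarks 4.3 and 4.4, the quantities ${\overline{\mathrm{\Psi }}}_{0}$ and σ can be evaluated for concrete large-update parameters (the values of n, τ, θ below are arbitrary; θ = 0.7 > 5/9 as required in Remark 4.3):

```python
import numpy as np

def psi0_bar(n, p, tau, theta):
    """The upper bound (4.3) on Psi(v) after a mu-update."""
    return (theta * np.log(p) * n + 2 * np.sqrt(2 * tau * np.log(p) * n)
            + 2 * tau) / (2 * (1 - theta))

n, p, tau, theta = 1000, np.e, 1000.0, 0.7   # large-update: tau = O(n), theta = Theta(1)
L = psi0_bar(n, p, tau, theta)
sigma = 1 + 2 * np.log(1 + L)                # the choice of sigma from Remark 4.3
print(L, sigma)
```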

In the following we compute a default step size which keeps the iterates strictly feasible and decreases the value of the barrier function during inner iterations. For fixed μ, if we take a step of size α, then we obtain new iterates ${x}_{+}:=x+\alpha \mathrm{\Delta }x$, ${s}_{+}:=s+\alpha \mathrm{\Delta }s$. Using (2.3), we have

$\begin{array}{c}{x}_{+}=x\left(e+\alpha \frac{\mathrm{\Delta }x}{x}\right)=x\left(e+\alpha \frac{{d}_{x}}{v}\right)=\frac{x}{v}\left(v+\alpha {d}_{x}\right),\hfill \\ {s}_{+}=s\left(e+\alpha \frac{\mathrm{\Delta }s}{s}\right)=s\left(e+\alpha \frac{{d}_{s}}{v}\right)=\frac{s}{v}\left(v+\alpha {d}_{s}\right).\hfill \end{array}$

Thus we have

${v}_{+}:=\sqrt{\frac{{x}_{+}{s}_{+}}{\mu }}=\sqrt{\left(v+\alpha {d}_{x}\right)\left(v+\alpha {d}_{s}\right)}.$

Define for $\alpha >0$,

$f\left(\alpha \right):=\mathrm{\Psi }\left({v}_{+}\right)-\mathrm{\Psi }\left(v\right).$
(4.4)

Then $f\left(\alpha \right)$ is the difference of proximities between the new iterate and the current iterate for fixed μ. Assume that for some $\alpha \ge 0$, ${\left[v\right]}_{i}+\alpha {\left[{d}_{x}\right]}_{i}>\frac{1}{\sigma logp}$ and ${\left[v\right]}_{i}+\alpha {\left[{d}_{s}\right]}_{i}>\frac{1}{\sigma logp}$, for all $i\in I$. By Corollary 3.2,

$\mathrm{\Psi }\left({v}_{+}\right)=\mathrm{\Psi }\left(\sqrt{\left(v+\alpha {d}_{x}\right)\left(v+\alpha {d}_{s}\right)}\right)\le \frac{1}{2}\left(\mathrm{\Psi }\left(v+\alpha {d}_{x}\right)+\mathrm{\Psi }\left(v+\alpha {d}_{s}\right)\right).$

Define

${f}_{1}\left(\alpha \right):=\frac{1}{2}\left(\mathrm{\Psi }\left(v+\alpha {d}_{x}\right)+\mathrm{\Psi }\left(v+\alpha {d}_{s}\right)\right)-\mathrm{\Psi }\left(v\right).$

Then we have $f\left(\alpha \right)\le {f}_{1}\left(\alpha \right)$ and $f\left(0\right)={f}_{1}\left(0\right)=0$. By taking the derivative of ${f}_{1}\left(\alpha \right)$ with respect to α, we have

${f}_{1}^{\prime }\left(\alpha \right)=\frac{1}{2}\sum _{i=1}^{n}\left({\psi }^{\prime }\left({\left[v\right]}_{i}+\alpha {\left[{d}_{x}\right]}_{i}\right){\left[{d}_{x}\right]}_{i}+{\psi }^{\prime }\left({\left[v\right]}_{i}+\alpha {\left[{d}_{s}\right]}_{i}\right){\left[{d}_{s}\right]}_{i}\right).$

Using (2.5) and (3.5), we have

${f}_{1}^{\prime }\left(0\right)=\frac{1}{2}\mathrm{\nabla }\mathrm{\Psi }{\left(v\right)}^{T}\left({d}_{x}+{d}_{s}\right)=-\frac{1}{2}\mathrm{\nabla }\mathrm{\Psi }{\left(v\right)}^{T}\mathrm{\nabla }\mathrm{\Psi }\left(v\right)=-2{\delta }^{2}\left(v\right).$
(4.5)

Differentiating ${f}_{1}^{\prime }\left(\alpha \right)$ with respect to α, we have

${f}_{1}^{″}\left(\alpha \right)=\frac{1}{2}\sum _{i=1}^{n}\left({\psi }^{″}\left({\left[v\right]}_{i}+\alpha {\left[{d}_{x}\right]}_{i}\right){\left[{d}_{x}\right]}_{i}^{2}+{\psi }^{″}\left({\left[v\right]}_{i}+\alpha {\left[{d}_{s}\right]}_{i}\right){\left[{d}_{s}\right]}_{i}^{2}\right).$

Since ${f}_{1}^{″}\left(\alpha \right)>0$, ${f}_{1}\left(\alpha \right)$ is strictly convex in α unless ${d}_{x}={d}_{s}=0$. Since M is a ${P}_{\ast }\left(\kappa \right)$-matrix and $M\mathrm{\Delta }x=\mathrm{\Delta }s$ from (2.2), for $\mathrm{\Delta }x\in {\mathbf{R}}^{n}$,

$\left(1+4\kappa \right)\sum _{i\in {I}_{+}}{\left[\mathrm{\Delta }x\right]}_{i}{\left[\mathrm{\Delta }s\right]}_{i}+\sum _{i\in {I}_{-}}{\left[\mathrm{\Delta }x\right]}_{i}{\left[\mathrm{\Delta }s\right]}_{i}\ge 0,$

where ${I}_{+}=\left\{i\in I:{\left[\mathrm{\Delta }x\right]}_{i}{\left[\mathrm{\Delta }s\right]}_{i}\ge 0\right\}$ and ${I}_{-}=I-{I}_{+}$. Since ${d}_{x}{d}_{s}=\frac{{v}^{2}\mathrm{\Delta }x\mathrm{\Delta }s}{xs}=\frac{\mathrm{\Delta }x\mathrm{\Delta }s}{\mu }$ and $\mu >0$, we have

$\left(1+4\kappa \right)\sum _{i\in {I}_{+}}{\left[{d}_{x}\right]}_{i}{\left[{d}_{s}\right]}_{i}+\sum _{i\in {I}_{-}}{\left[{d}_{x}\right]}_{i}{\left[{d}_{s}\right]}_{i}\ge 0.$

For notational convenience, we denote $\delta :=\delta \left(v\right)$, $\mathrm{\Psi }:=\mathrm{\Psi }\left(v\right)$, ${\sigma }_{+}:={\sum }_{i\in {I}_{+}}{\left[{d}_{x}\right]}_{i}{\left[{d}_{s}\right]}_{i}$ and ${\sigma }_{-}:=-{\sum }_{i\in {I}_{-}}{\left[{d}_{x}\right]}_{i}{\left[{d}_{s}\right]}_{i}$. To estimate the bound for $\parallel {d}_{x}\parallel$ and $\parallel {d}_{s}\parallel$, we need the following technical lemma.

Lemma 4.5 (Modification of Lemma 4.1 in [8])

${\sigma }_{+}\le {\delta }^{2}$ and ${\sigma }_{-}\le \left(1+4\kappa \right){\delta }^{2}$.

Lemma 4.6 (Modification of Lemma 4.2 in [8])

${\sum }_{i=1}^{n}\left({\left[{d}_{x}\right]}_{i}^{2}+{\left[{d}_{s}\right]}_{i}^{2}\right)\le 4\left(1+2\kappa \right){\delta }^{2}$, $\parallel {d}_{x}\parallel \le 2\delta \sqrt{1+2\kappa }$ and $\parallel {d}_{s}\parallel \le 2\delta \sqrt{1+2\kappa }$.

Using (4.5) and Lemma 4.6, we have the following lemma.

Lemma 4.7 (Modification of Lemma 4.3 in [8])

Let δ be defined as in (3.5). Then we have

${f}_{1}^{″}\left(\alpha \right)\le 2\left(1+2\kappa \right){\delta }^{2}{\psi }^{″}\left({v}_{1}-2\alpha \delta \sqrt{1+2\kappa }\right).$

Using (4.5) and Lemma 4.7, we have the following lemma.

Lemma 4.8 (Modification of Lemma 4.4 in [8])

If the step size α satisfies the inequality

$-{\psi }^{\prime }\left({v}_{1}-2\alpha \delta \sqrt{1+2\kappa }\right)+{\psi }^{\prime }\left({v}_{1}\right)\le \frac{2\delta }{\sqrt{1+2\kappa }},$
(4.6)

then ${f}_{1}^{\prime }\left(\alpha \right)\le 0$.

Lemma 4.9 (Modification of Lemma 4.5 in [8])

Let ρ be defined as in Lemma  3.6 and $a:=1+\frac{1}{\sqrt{1+2\kappa }}$. Then, in the worst case, the largest step size α satisfying (4.6) is given by

$\overline{\alpha }:=\frac{1}{2\delta \sqrt{1+2\kappa }}\left(\rho \left(\delta \right)-\rho \left(a\delta \right)\right).$
(4.7)

Lemma 4.10 (Modification of Lemma 4.6 in [8])

Let ρ and $\overline{\alpha }$ be defined as in Lemma  4.9. Then

$\overline{\alpha }\ge \frac{1}{\left(1+2\kappa \right){\psi }^{″}\left(\rho \left(a\delta \right)\right)}.$

Let

$\stackrel{˜}{\alpha }:=\frac{1}{\left(1+2\kappa \right){\psi }^{″}\left(\rho \left(a\delta \right)\right)}.$
(4.8)

Letting $t:=\rho \left(a\delta \right)$, we have $0<t\le 1$ and $-{\psi }^{\prime }\left(t\right)=2a\delta$. By (3.2) and $1\le a\le 2$, we have

$\left(logp\right){p}^{\sigma \left(1-t\right)}\le \left(logp\right)+4\delta .$
(4.9)

By (4.8) with (3.2) and $\sigma \ge 1$, (4.9), Lemma 3.7 with $\mathrm{\Psi }\ge \tau \ge 1$ and $p\ge e$,

$\begin{array}{rcl}\stackrel{˜}{\alpha }& \ge & \frac{1}{\sigma \left(1+2\kappa \right)\left(logp\right)\left(1+{p}^{\sigma \left(1-t\right)}logp\right)}\ge \frac{1}{\sigma \left(1+2\kappa \right)\left(logp\right)\left(1+4\delta +logp\right)}\\ \ge & \frac{1}{\sigma \delta \left(1+2\kappa \right)\left(logp\right)\left(\sqrt{2}+4+\sqrt{2}logp\right)}\ge \frac{1}{2\sigma \delta \left(1+2\kappa \right){\left(logp\right)}^{2}\left(2+\sqrt{2}\right)}.\end{array}$

Define the default step size $\stackrel{ˆ}{\alpha }$ as follows:

$\stackrel{ˆ}{\alpha }:=\frac{1}{2\sigma \delta \left(1+2\kappa \right){\left(logp\right)}^{2}\left(2+\sqrt{2}\right)}.$
(4.10)

Using Lemma 4.6, Lemma 3.8, and (4.10), ${\left[v\right]}_{i}+\stackrel{ˆ}{\alpha }{\left[{d}_{x}\right]}_{i}\ge {v}_{1}-2\stackrel{ˆ}{\alpha }\delta \sqrt{1+2\kappa }\ge \frac{3}{2\sigma logp}-\frac{1}{\sigma \sqrt{1+2\kappa }{\left(logp\right)}^{2}\left(2+\sqrt{2}\right)}\ge \left(\frac{3}{2}-\frac{1}{\left(2+\sqrt{2}\right)logp}\right)\frac{1}{\sigma logp}\ge \left(\frac{3}{2}-\frac{1}{2+\sqrt{2}}\right)\frac{1}{\sigma logp}>\frac{1}{\sigma logp}$, for all $i\in I$. In the same way, ${\left[v\right]}_{i}+\stackrel{ˆ}{\alpha }{\left[{d}_{s}\right]}_{i}>\frac{1}{\sigma logp}$, for all $i\in I$. Hence we can use the exponential convexity of $\psi \left(t\right)$.

Lemma 4.11 (Lemma 1.3.3 in [4])

Let a function h be twice differentiable and convex with $h\left(0\right)=0$, ${h}^{\prime }\left(0\right)<0$ and let h attain its (global) minimum at ${t}^{\ast }>0$. If ${h}^{″}\left(t\right)$ is monotonically increasing on $t\in \left[0,{t}^{\ast }\right]$, then

$h\left(t\right)\le \frac{t{h}^{\prime }\left(0\right)}{2},\phantom{\rule{1em}{0ex}}0\le t\le {t}^{\ast }.$

Using ${f}_{1}\left(0\right)=0$, (4.5), ${\psi }^{‴}<0$, and Lemma 4.11, we have the following lemma.

Lemma 4.12 Let $\overline{\alpha }$ be defined as in (4.7). If the step size α is such that $\alpha \le \overline{\alpha }$, then

$f\left(\alpha \right)\le -\alpha {\delta }^{2}.$

Using Lemma 4.12, (4.10), and Lemma 3.7, we have the following theorem.

Theorem 4.13 For $\stackrel{ˆ}{\alpha }$ as in (4.10), $f\left(\stackrel{ˆ}{\alpha }\right)\le -\frac{{\mathrm{\Psi }}^{\frac{1}{2}}}{4\left(1+\sqrt{2}\right)\left(1+2\kappa \right)\sigma {\left(logp\right)}^{\frac{3}{2}}}$.

Proposition 4.14 (Proposition 1.3.2 in [4])

Let ${t}_{0},{t}_{1},\dots ,{t}_{\overline{K}}$ be a sequence of positive numbers such that

${t}_{k+1}\le {t}_{k}-\lambda {t}_{k}^{1-\gamma },\phantom{\rule{1em}{0ex}}k=0,1,\dots ,\overline{K}-1,$

where $\lambda >0$ and $0<\gamma \le 1$. Then $\overline{K}\le ⌊\frac{{t}_{0}^{\gamma }}{\lambda \gamma }⌋$.
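A quick numerical sanity check of Proposition 4.14 (sample values of ours, not from the paper): with $\gamma =\frac{1}{2}$, iterate the extremal recursion ${t}_{k+1}={t}_{k}-\lambda \sqrt{{t}_{k}}$ while the iterate stays positive, and compare the step count with $⌊{t}_{0}^{\gamma }/\left(\lambda \gamma \right)⌋$:

```python
import math

def count_steps(t0, lam, gamma=0.5):
    # Apply t_{k+1} = t_k - lam * t_k**(1-gamma) while the iterate stays positive.
    t, k = t0, 0
    while t - lam * t ** (1 - gamma) > 0:
        t -= lam * t ** (1 - gamma)
        k += 1
    return k + 1  # the next step drives the iterate to <= 0

def prop_bound(t0, lam, gamma=0.5):
    # Bound of Proposition 4.14: floor(t0**gamma / (lam * gamma)).
    return math.floor(t0 ** gamma / (lam * gamma))

for t0, lam in [(100.0, 1.0), (50.0, 0.3), (10.0, 0.05)]:
    assert count_steps(t0, lam) <= prop_bound(t0, lam)
print("Proposition 4.14 bound respected on samples")
```

The continuous analogue ${t}^{\prime }=-\lambda \sqrt{t}$ reaches zero at $k=2\sqrt{{t}_{0}}/\lambda$, which is exactly the bound with $\gamma =\frac{1}{2}$; the discrete sequence decreases at least as fast.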

We define the value of $\mathrm{\Psi }\left(v\right)$ after the μ-update as ${\mathrm{\Psi }}_{0}$ and denote the subsequent values in the same outer iteration by ${\mathrm{\Psi }}_{k}$, $k=1,2,\dots$ . Then we have ${\mathrm{\Psi }}_{0}\le {\overline{\mathrm{\Psi }}}_{0}$. If K denotes the number of inner iterations per outer iteration, then ${\mathrm{\Psi }}_{K-1}>\tau$ and $0\le {\mathrm{\Psi }}_{K}\le \tau$. In the following theorem we give a bound on the total number of iterations.

Theorem 4.15 Let a ${P}_{\ast }\left(\kappa \right)$-LCP be given. If $\tau \ge 1$, then the total number of iterations of the algorithm to get an ϵ-approximate solution is bounded by

$⌈\frac{8\left(1+\sqrt{2}\right)\left(1+2\kappa \right)\sigma {\left(logp\right)}^{\frac{3}{2}}{\overline{\mathrm{\Psi }}}_{0}^{\frac{1}{2}}}{\theta }log\frac{n{\mu }^{0}}{ϵ}⌉.$
(4.11)

Proof Using Proposition 4.14 with $\gamma :=\frac{1}{2}$ and $\lambda :=\frac{1}{4\left(1+\sqrt{2}\right)\left(1+2\kappa \right)\sigma {\left(logp\right)}^{\frac{3}{2}}}$, the number of inner iterations is bounded by $4\sqrt{2}\left(2+\sqrt{2}\right)\left(1+2\kappa \right)\sigma {\left(logp\right)}^{\frac{3}{2}}{\overline{\mathrm{\Psi }}}_{0}^{\frac{1}{2}}=8\left(1+\sqrt{2}\right)\left(1+2\kappa \right)\sigma {\left(logp\right)}^{\frac{3}{2}}{\overline{\mathrm{\Psi }}}_{0}^{\frac{1}{2}}$, since $4\sqrt{2}\left(2+\sqrt{2}\right)=8\left(1+\sqrt{2}\right)$. If the central path parameter μ has the initial value ${\mu }^{0}>0$ and is updated by the factor $1-\theta$, $0<\theta <1$, then after at most $⌈\frac{1}{\theta }log\frac{n{\mu }^{0}}{ϵ}⌉$ outer iterations we have $n\mu <ϵ$ [18]. Multiplying the number of inner iterations by the number of outer iterations, we conclude that the total number of iterations is bounded by $⌈\frac{8\left(1+\sqrt{2}\right)\left(1+2\kappa \right)\sigma {\left(logp\right)}^{\frac{3}{2}}{\overline{\mathrm{\Psi }}}_{0}^{\frac{1}{2}}}{\theta }log\frac{n{\mu }^{0}}{ϵ}⌉$. □
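The constant bookkeeping in the proof can be verified numerically: $4\sqrt{2}\left(2+\sqrt{2}\right)=8\left(1+\sqrt{2}\right)$, and bound (4.11) can be evaluated for concrete data. The sample parameter values below (κ, σ, p, ${\overline{\mathrm{\Psi }}}_{0}$, θ, n, ${\mu }^{0}$, ϵ) are illustrative assumptions, not values from the paper:

```python
import math

# Identity used implicitly in the proof: 4*sqrt(2)*(2 + sqrt(2)) == 8*(1 + sqrt(2)).
assert abs(4 * math.sqrt(2) * (2 + math.sqrt(2)) - 8 * (1 + math.sqrt(2))) < 1e-12

def total_bound(kappa, sigma, p, psi0, theta, n, mu0, eps):
    # Bound (4.11): ceil( 8(1+sqrt2)(1+2*kappa)*sigma*(log p)^{3/2}*psi0^{1/2}
    #                     / theta * log(n*mu0/eps) ).
    c = 8 * (1 + math.sqrt(2)) * (1 + 2 * kappa) * sigma
    return math.ceil(c * math.log(p) ** 1.5 * math.sqrt(psi0) / theta
                     * math.log(n * mu0 / eps))

# Sample (hypothetical) data: kappa=0.5, sigma=1, p=e, Psi0_bar=100, theta=0.5.
print(total_bound(kappa=0.5, sigma=1.0, p=math.e, psi0=100.0,
                  theta=0.5, n=100, mu0=1.0, eps=1e-6))
```

Note the $\sqrt{{\overline{\mathrm{\Psi }}}_{0}}$ dependence: the bound in Section 5 follows by substituting the order of ${\overline{\mathrm{\Psi }}}_{0}$ for large-update parameter choices.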

## 5 Concluding remarks

Wang and Bai [16] defined a parametric version of the kernel function in [14], with the parameter in the growth term, and generalized the algorithm for linear optimization (LO) to ${P}_{\ast }\left(\kappa \right)$-LCPs based on this kernel function. El Ghami, Roos, and Steihaug [19] extended the LO algorithm to semidefinite optimization based on the kernel function in [16]. However, they obtained the best known complexity bound for large-update methods only when the kernel function reduces to the one in [14].

Motivated by this, we considered a parametric version of the kernel function in [14] with the parameters in the barrier term. For large-update methods, by taking $\tau =\mathcal{O}\left(n\right)$ and $\theta =\mathrm{\Theta }\left(1\right)$, we obtained the iteration bound $\mathcal{O}\left(\left(1+2\kappa \right){\left(logp\right)}^{3}\sqrt{n}\left(logn\right)log\frac{n{\mu }^{0}}{ϵ}\right)$ for $p\ge e$, which is the best known complexity bound for such methods.

Further research will address numerical testing and extensions to more general problems.

## References

1. Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, San Diego; 1992.

2. Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002, 93: 129-171. 10.1007/s101070200296

3. Peng J, Roos C, Terlaky T: Primal-dual interior-point methods for second-order conic optimization based on self-regular proximities. SIAM J. Optim. 2002, 13: 179-203. 10.1137/S1052623401383236

4. Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.

5. Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101-128. 10.1137/S1052623403423114

6. Bai YQ, Lesaja G, Roos C: A new class of polynomial interior-point algorithms for ${P}_{\ast }\left(\kappa \right)$ linear complementarity problems. Pac. J. Optim. 2008, 4: 19-41.

7. Cho GM: A new large-update interior point algorithm for ${P}_{\ast }\left(\kappa \right)$ linear complementarity problems. J. Comput. Appl. Math. 2008, 216: 256-278.

8. Cho GM, Kim MK: A new large-update interior point algorithm for ${P}_{\ast }\left(\kappa \right)$ LCPs based on kernel functions. Appl. Math. Comput. 2006, 182: 1169-1183. 10.1016/j.amc.2006.04.060

9. Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007, 49: 259-270. 10.1017/S1446181100012827

10. Amini K, Peyghami MR: Exploring complexity of large update interior-point methods for ${P}_{\ast }\left(\kappa \right)$ linear complementarity problem based on kernel function. Appl. Math. Comput. 2009, 207: 501-513. 10.1016/j.amc.2008.11.002

11. Lesaja G, Roos C: Unified analysis of kernel-based interior-point methods for ${P}_{\ast }\left(\kappa \right)$-linear complementarity problems. SIAM J. Optim. 2010, 20: 3014-3039. 10.1137/090766735

12. Wang GQ, Bai YQ: A class of polynomial interior-point algorithms for the Cartesian ${P}_{\ast }\left(\kappa \right)$-matrix linear complementarity problem over symmetric cones. J. Optim. Theory Appl. 2012, 152: 739-772. 10.1007/s10957-011-9938-8

13. Wang GQ, Lesaja G: Full Nesterov-Todd step feasible interior-point method for the Cartesian ${P}_{\ast }\left(\kappa \right)$-SCLCP. Optim. Methods Softw. 2013, 28: 600-618. 10.1080/10556788.2013.781600

14. Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766-782.

15. Ghami ME: New primal-dual interior-point methods based on kernel functions. Dissertation, Delft University of Technology; 2005.

16. Wang GQ, Bai YQ: Polynomial interior-point algorithms for ${P}_{\ast }\left(\kappa \right)$ horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248-263. 10.1016/j.cam.2009.07.014

17. Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, Berlin; 1991.

18. Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization: An Interior Approach. Wiley, Chichester; 1997.

19. Ghami ME, Roos C, Steihaug T: A generic primal-dual interior point method for semidefinite optimization based on a new class of kernel functions. Optim. Methods Softw. 2010, 25: 387-403. 10.1080/10556780903239048


## Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2010094).

## Author information


### Corresponding author

Correspondence to Gyeong-Mi Cho.

## Additional information

### Competing interests

The author declares that they have no competing interests.


## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## About this article

### Cite this article

Cho, GM. Large-update interior point algorithm for ${P}_{\ast }$-linear complementarity problem. J Inequal Appl 2014, 363 (2014). https://doi.org/10.1186/1029-242X-2014-363


### Keywords

• interior point method
• barrier function
• complexity
• linear complementarity problem