
On complexity of a new Mehrotra-type interior point algorithm for \(P_{*}(\kappa )\) linear complementarity problems

Abstract

In this paper, a variant of the Mehrotra-type predictor–corrector algorithm is proposed for \(P_{*}(\kappa )\) linear complementarity problems. In this algorithm, a safeguard step is used to avoid small step sizes, and a new corrector direction is adopted. The algorithm has polynomial iteration complexity, with iteration bound \(O ((14\kappa +11)\sqrt{(1+4 \kappa )(1+2\kappa )}~n\log \frac{(x^{0})^{T}s^{0}}{\varepsilon } )\). Some numerical results are reported as well.

1 Introduction

A \(P_{*}(\kappa )\) linear complementarity problem (LCP) is to find vectors \(x \in \mathbb{R}^{n}\) and \(s \in \mathbb{R}^{n}\) such that

$$ \begin{aligned} &Mx+q =s, \\ &x^{T}s =0, \\ &x,s \geq 0, \end{aligned} $$
(1)

where \(M\in \mathbb{R}^{n\times n}\) is a \(P_{*}(\kappa )\) matrix and \(q\in \mathbb{R}^{n}\).

LCPs are closely associated with linear programming and quadratic programming. It is well known that a differentiable convex quadratic program can be formulated as a monotone LCP via its first-order optimality conditions, and vice versa [1]. Transportation planning and game theory also have a close connection with LCPs [2, 3].

Interior point algorithms for LCPs have been widely studied in the last few decades [4]. In 1991, Kojima et al. [5] extended all the previously known results to \(P_{*}(\kappa )\) LCPs and unified the theory of LCPs from the viewpoint of interior point methods. Since then, many interior point algorithms for linear programming have been extended to \(P_{*}(\kappa )\) LCPs. Illés and Nagy [6] and Miao [7] studied Mizuno–Todd–Ye type interior point algorithms for \(P_{*}(\kappa )\) LCPs. Cho [8], and Cho and Kim [9] proposed two polynomial interior point algorithms based on kernel functions for \(P_{*}(\kappa )\) LCPs. Using a new updating strategy for the centering parameter, Liu et al. [10] extended two Mehrotra-type predictor–corrector algorithms to sufficient LCPs.

Predictor–corrector algorithms are practical interior point methods for linear programming, quadratic programming and LCPs, and their variants have become the backbones of several optimization software packages [11, 12]. Mehrotra [13] proposed a predictor–corrector algorithm for linear programming in which the predictor and corrector steps share the same coefficient matrix, so it requires less computational effort than other methods. Since then, several variants of this algorithm have been studied. Jarre and Wechs [14] studied a new primal–dual interior point method in which the search directions are based on the corrector directions of Mehrotra-type algorithms. Zhang and Zhang [15] presented a second order Mehrotra-type predictor–corrector algorithm without updating the central path. Salahi et al. [16] observed that, in a variant of Mehrotra’s algorithm, some steps must be very small in order to keep the iterates in a large neighborhood of the central path. To avoid small or zero steps, Salahi et al. introduced safeguards in the corrector steps [16] and a new criterion on the predictor step sizes [17]; moreover, they proved that the two algorithms have polynomial complexity and are practically efficient. Infeasible versions of Mehrotra-type algorithms have been studied by Liu et al. [18], and Yang et al. [19].

In this paper, a new variant of the Mehrotra-type predictor–corrector algorithm is proposed for \(P_{*}(\kappa )\) LCPs. In this algorithm, the corrector step differs from that of other Mehrotra-type predictor–corrector algorithms [16, 17]. If (Δx, Δs) is the search direction of a \(P_{*}(\kappa )\) LCP, then in general \(\Delta x^{T}\Delta s \neq 0 \), whereas \(\Delta x^{T}\Delta {s}=0 \) if (Δx, Δs) is the search direction of linear programming; hence the analysis differs from that of linear programming. If an iterate \((x,s)\) takes a step of size α along the corrector direction \((\Delta x,\Delta s)\), the duality gap parameter becomes \(\mu _{g}(\alpha )=(1-\alpha )\mu _{g}+\alpha \mu -\alpha \alpha _{a}^{2} \frac{{\Delta x^{a}}^{T}\Delta s^{a}}{n}+{\alpha ^{2}}\frac{{\Delta x}^{T}\Delta s}{n}\), where \(\alpha _{a}\) is the predictor step size and \((\Delta x^{a},\Delta s ^{a})\) is the predictor search direction. In order to reduce the duality gap \(x^{T}s\), the corrector step size α should be chosen such that \(\mu _{g}(\alpha ) \leq \mu _{g}\); that is, α should have an upper bound. If α is larger than a given threshold, then the threshold is chosen as the corrector step size. The iteration complexity of the new algorithm is \(O ((14\kappa +11)\sqrt{(1+4 \kappa )(1+2\kappa )}~n\log \frac{(x^{0})^{T}s^{0}}{\varepsilon } )\), which is analogous to that for linear programming.

This paper is organized as follows. In Sect. 2, a new Mehrotra-type predictor–corrector algorithm for \(P_{*}(\kappa )\) LCPs is introduced. In Sect. 3, the polynomial iteration complexity is provided. Some illustrative numerical results are reported in Sect. 4. Finally, some concluding remarks are given in Sect. 5.

We use the following notations throughout the paper: \(\|\cdot \|\) denotes the 2-norm of vectors, e is the n-dimensional vector of ones. For any two n-dimensional vectors x and s, xs is the componentwise product of the two vectors. We also use the following notations.

$$\begin{aligned}& I=\{1,2,\ldots,n\}, \qquad I_{+}=\bigl\{ i\in I|\Delta x_{i}^{a}\Delta s_{i}^{a} \geq 0 \bigr\} ,\qquad I_{-}=\bigl\{ i\in I|\Delta x_{i}^{a} \Delta s_{i}^{a}< 0\bigr\} , \\& F_{+}=\bigl\{ (x,s)\in \mathbb{R}^{n}\times \mathbb{R}^{n}|s=Mx+q, (x,s) \geq 0\bigr\} , \\& F_{++}=\bigl\{ (x,s)\in \mathbb{R}^{n}\times \mathbb{R}^{n}|s=Mx+q, (x,s)>0 \bigr\} . \end{aligned}$$

2 Mehrotra-type predictor–corrector algorithm

A matrix \(M\in \mathbb{R}^{n\times n}\) is a \(P_{*}(\kappa )\) matrix [5] if

$$\begin{aligned} (1+4\kappa )\sum_{j\in J_{+}}x_{j}(Mx)_{j}+ \sum_{j \in J_{-}}x_{j}(Mx)_{j}\geq 0, \quad \forall x\in \mathbb{R}^{n}, \end{aligned}$$
(2)

or

$$\begin{aligned} x^{T}Mx\geq -4\kappa \sum_{j\in J_{+}}x_{j}(Mx)_{j}, \ \ \forall x\in \mathbb{R}^{n}, \end{aligned}$$
(3)

where \(\kappa \geq 0\), \(J_{+}=\{j|j\in I, x_{j}(Mx)_{j}\geq 0\}\), and \(J_{-}=\{j|j\in I, x_{j}(Mx)_{j}<0\}\).

Note that every positive semi-definite matrix is a \(P_{*}(0)\) matrix; thus the class of \(P_{*}(\kappa )\) matrices includes the class of positive semi-definite matrices. Other properties of \(P_{*}(\kappa )\) matrices can be found in [5].
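As a small numerical illustration (ours, not part of the original paper), the defining inequality (3) can be sampled to estimate from below the smallest κ for which a given M is \(P_{*}(\kappa )\); sampling yields only a lower bound, not a certificate.

```python
import numpy as np

def kappa_lower_bound(M, trials=10000, seed=0):
    """Sample inequality (3) to bound the handicap of M from below.

    For any x with x^T M x < 0 and a nonempty positive part, (3) forces
        kappa >= -x^T M x / (4 * sum_{j in J+} x_j (M x)_j),
    so random sampling yields a lower bound on the smallest admissible
    kappa, not a proof that M is P_*(kappa).
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    kappa = 0.0
    for _ in range(trials):
        x = rng.standard_normal(n)
        y = x * (M @ x)                  # componentwise x_j (M x)_j
        pos = y[y > 0].sum()
        if y.sum() < 0 and pos > 0:
            kappa = max(kappa, -y.sum() / (4 * pos))
        # if y.sum() < 0 and pos == 0, M is not P_*(kappa) for any kappa
    return kappa

# The matrix of Problem 1 in Sect. 4 is P_*(1/4):
M = np.array([[0.0, 1.0], [-2.0, 0.0]])
print(kappa_lower_bound(M))              # prints 0.25 for this M
```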

Without loss of generality [5], we assume that the \(P_{*}(\kappa )\) LCP (1) satisfies the interior point condition, that is, there exists a point \((x^{0},s^{0})\) such that

$$ s^{0}=Mx^{0}+q,\quad x^{0}>0, s^{0}>0. $$

To find an approximate solution of (1), the following parameterized system is established:

$$ \begin{aligned} &Mx+q =s, \\ &xs =\mu e, \\ &x,s \geq 0, \end{aligned} $$
(4)

where \(\mu >0\).

If the \(P_{*}(\kappa )\) LCP (1) satisfies the interior point condition, then the system (4) has a unique solution for any \(\mu > 0\). For a given μ, the solution, denoted by \((x(\mu ),s(\mu ))\), is called a μ-center of (1). The set of all μ-centers gives the central path of (1). A primal–dual interior point algorithm follows the central path \(\{(x(\mu ),s(\mu ))|\mu >0\}\) approximately and approaches the solution of (1) as μ goes to zero.

Most interior point algorithms work in the neighborhood \(N_{\infty } ^{-}(\gamma )\) defined by

$$ N_{\infty }^{-}(\gamma )=\bigl\{ (x,s)\in {F}_{++} | x_{i}s_{i}\geq \gamma \mu _{g},\forall i\in I \bigr\} , $$

where \(\gamma \in (0,1)\) is a constant independent of n and \(\mu _{g}=\frac{x^{T}s}{n}\).
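For later illustration, membership in \(N_{\infty }^{-}(\gamma )\) is easy to test numerically. The following helper (ours, reused in the sketches below) checks positivity and the centrality condition:

```python
import numpy as np

def in_neighborhood(x, s, gamma):
    """Test (x, s) against N_inf^-(gamma): positivity plus x_i*s_i >= gamma*mu_g.

    Feasibility s = Mx + q is preserved by the search directions used below
    and is therefore not re-checked here.
    """
    xs = x * s
    return bool(np.all(x > 0) and np.all(s > 0) and np.all(xs >= gamma * xs.mean()))
```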

Now, based on [16], a new variant of Mehrotra-type predictor–corrector algorithm for \(P_{*}(\kappa )\) LCPs will be described.

The predictor search direction (\(\Delta x^{a}\), \(\Delta s^{a}\)) is determined by the following equations:

$$ \begin{aligned} &M\Delta x^{a}=\Delta s^{a}, \\ &s\Delta x^{a}+x\Delta s^{a}=-xs. \end{aligned} $$
(5)

The predictor step size \(\alpha _{a}\) is defined by

$$\begin{aligned} \alpha _{a}=\max \bigl\{ \alpha \mid \bigl(x+\alpha \Delta x^{a},s+\alpha \Delta s^{a}\bigr)\geq 0, 0 < \alpha \leq 1\bigr\} . \end{aligned}$$
(6)
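Concretely, substituting \(\Delta s^{a}=M\Delta x^{a}\) into the second equation of (5) gives the \(n\times n\) linear system \((S+XM)\Delta x^{a}=-xs\), where \(X=\operatorname{diag}(x)\) and \(S=\operatorname{diag}(s)\). A minimal dense-solver sketch (ours; not the authors' implementation):

```python
import numpy as np

def predictor_direction(M, x, s):
    """Solve (5): M*dx = ds and s*dx + x*ds = -x*s (componentwise)."""
    # Eliminating ds = M @ dx yields (diag(s) + diag(x) @ M) dx = -x*s.
    dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, -x * s)
    return dx, M @ dx

def max_feasible_step(x, s, dx, ds):
    """Largest alpha in (0,1] with x + alpha*dx >= 0 and s + alpha*ds >= 0, cf. (6)."""
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, float(np.min(-v[neg] / dv[neg])))
    return alpha
```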

However, this algorithm does not take a step along the direction \((\Delta x^{a},\Delta s^{a})\). Using information of the predictor step, the algorithm computes the corrector search direction \((\Delta x, \Delta s)\) by solving the following system:

$$ \begin{aligned}& M\Delta x= \Delta s, \\ &s\Delta x+x\Delta s =\mu e-xs-\alpha _{a}^{2} \Delta x^{a}\Delta s^{a}, \end{aligned} $$
(7)

where

$$ \mu = \biggl(\frac{g_{a}}{g} \biggr)^{2} \frac{g_{a}}{n} $$
(8)

with \(g_{a}=(x+\alpha _{a}\Delta x^{a})^{T}(s+\alpha _{a}\Delta s^{a})\) and \(g=x^{T}s\). The second equation of (7) is different from that in [16], where it is \(s\Delta x+x \Delta s =\mu e-xs-\Delta x^{a}\Delta s^{a}\).
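The corrector system (7) has the same coefficient matrix as (5), so only the right-hand side changes. A companion sketch (ours) that computes μ by (8) and solves (7):

```python
import numpy as np

def corrector_direction(M, x, s, dxa, dsa, alpha_a):
    """Compute mu by (8) and solve (7) for the corrector direction."""
    n = len(x)
    g = x @ s
    g_a = (x + alpha_a * dxa) @ (s + alpha_a * dsa)
    mu = (g_a / g) ** 2 * (g_a / n)                          # eq. (8)
    rhs = mu * np.ones(n) - x * s - alpha_a**2 * dxa * dsa   # RHS of (7)
    dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, rhs)
    return dx, M @ dx, mu
```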

The next iterate is denoted by

$$ x(\alpha )=x+\alpha \Delta x, s(\alpha )=s+\alpha \Delta s, $$

where α is the corrector step size defined by

$$\begin{aligned} \alpha =\max \bigl\{ \alpha |\bigl(x(\alpha ),s(\alpha )\bigr)\in N_{\infty }^{-}( \gamma ), 0 < \alpha \leq 1\bigr\} . \end{aligned}$$
(9)

In order to avoid small steps, we combine Mehrotra’s updating strategy of the centering parameter with a safeguard step at each iteration. The new Mehrotra-type predictor–corrector algorithm for \(P_{*}(\kappa )\) LCPs is stated as Algorithm 1.

Algorithm 1

(Mehrotra-type predictor–corrector algorithm for \(P_{*}(\kappa )\) LCPs; the algorithm itself is displayed as a figure in the original article)
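Since the algorithm figure is not reproduced here, the following Python sketch gives one plausible rendering of the main loop, assembled from the description above and the case analysis in Sect. 3 (the thresholds \(\alpha _{a}\geq 0.3\) and \(\alpha \geq \frac{7\gamma }{16pn}\), the safeguard \(\mu =\frac{\gamma }{1-\gamma }\mu _{g}\), and the cap \(\alpha _{1}\) of (15)). It reuses the helpers sketched in this section; all function names are ours, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

def max_step_in_neighborhood(x, s, dx, ds, gamma, grid=2000):
    """Grid approximation of (9): the largest alpha in (0,1] keeping
    (x(alpha), s(alpha)) in N_inf^-(gamma); in_neighborhood is sketched above."""
    for alpha in np.linspace(1.0, 1e-4, grid):   # scan downwards from 1
        if in_neighborhood(x + alpha * dx, s + alpha * ds, gamma):
            return alpha
    return 0.0

def mpc_lcp(M, q, x, s, kappa, gamma=0.01, eps=1e-8, max_iter=500):
    """Hedged sketch of Algorithm 1 (our reading of Sects. 2-3)."""
    n = len(x)
    p = (14 * kappa + 11) / 16 * np.sqrt((1 + 4 * kappa) * (2 + 4 * kappa))
    qc = (14 * kappa + 11) / 16                  # the constant q of Corollary 6
    for _ in range(max_iter):
        if x @ s <= eps:
            break
        assert np.allclose(M @ x + q, s)         # iterates stay feasible
        mu_g = (x @ s) / n
        # Predictor: solve (5) and take the maximal feasible step (6).
        dxa, dsa = predictor_direction(M, x, s)
        alpha_a = max_feasible_step(x, s, dxa, dsa)
        # Corrector with Mehrotra's centering parameter (8).
        dx, ds, mu = corrector_direction(M, x, s, dxa, dsa, alpha_a)
        alpha = max_step_in_neighborhood(x, s, dx, ds, gamma)
        # Safeguard (Sect. 3): small predictor or corrector steps trigger
        # a recomputation of (7) with mu = gamma/(1-gamma)*mu_g.
        if alpha_a < 0.3 or alpha < 7 * gamma / (16 * p * n):
            mu = gamma / (1 - gamma) * mu_g
            rhs = mu * np.ones(n) - x * s - alpha_a**2 * dxa * dsa
            dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, rhs)
            ds = M @ dx
            alpha = max_step_in_neighborhood(x, s, dx, ds, gamma)
        # Cap the corrector step by alpha_1 of (15), then update.
        alpha_1 = (1 - 2*gamma - (1 - gamma)*kappa*alpha_a**2) / (2*qc*(1 - gamma))
        alpha = min(alpha, alpha_1)
        x, s = x + alpha * dx, s + alpha * ds
    return x, s
```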

3 Complexity analysis

The polynomial complexity of Algorithm 1 will be discussed in this section. Firstly, we present three important lemmas which will be used in convergence analysis.

Lemma 1

([16])

Let \((\Delta x^{a},\Delta s^{a})\) be the solution of (5). Then

$$\begin{aligned}& \Delta x_{i}^{a}\Delta s_{i}^{a}\leq \frac{1}{4} x_{i}s_{i},\quad \forall i \in I_{+}, \\& -\Delta x_{i}^{a}\Delta s_{i}^{a}\leq \frac{1}{\alpha _{a}} \biggl(\frac{1}{ \alpha _{a}}-1 \biggr) x_{i}s_{i},\quad \forall i\in I_{-}, \\& \sum_{i\in I_{+}}\Delta x_{i}^{a} \Delta s_{i}^{a}\leq \frac{x ^{T}s}{4}. \end{aligned}$$

Lemma 2

Let M be a \(P_{*}(\kappa )\) matrix and \((\Delta x^{a},\Delta s^{a})\) be the solution of (5). Then

$$\begin{aligned}& \sum_{i\in I_{-}} \bigl\vert \Delta x_{i}^{a} \Delta s_{i}^{a} \bigr\vert \leq \frac{4 \kappa +1}{4}x^{T}s, \\& -\kappa x^{T}s\leq {\Delta x^{a}}^{T}\Delta s^{a}\leq \frac{x^{T}s}{4}. \end{aligned}$$

Proof

Since M is a \(P_{*}(\kappa )\) matrix, we have

$$ 0 \geq \sum_{i\in I_{-}}\Delta x_{i}^{a} \Delta s_{i}^{a}\geq -(4 \kappa +1) \sum _{i\in I_{+}}\Delta x_{i}^{a}\Delta s_{i}^{a} \geq -\frac{4\kappa +1}{4}x^{T}s, $$

where the last inequality is due to the third conclusion of Lemma 1. Furthermore, from (3), we get

$$ \bigl(\Delta x^{a}\bigr)^{T}\Delta s^{a}\geq -4 \kappa \sum_{i\in I_{+}} \Delta x_{i}^{a} \Delta s_{i}^{a}\geq -4\kappa \frac{x^{T}s}{4}=- \kappa x^{T}s. $$

This completes the proof. □

Lemma 3

Let M be a \(P_{*}(\kappa )\) matrix and \((\Delta x,\Delta s)\) be the solution of (7) with \(\mu >0\). Then

$$\begin{aligned}& \Vert \Delta x\Delta s \Vert \leq \sqrt{ \biggl(\frac{1}{4}+\kappa \biggr) \biggl(\frac{1}{2}+\kappa \biggr)} \bigl\Vert \mu (xs)^{-\frac{1}{2}}-(xs)^{ \frac{1}{2}}-\alpha _{a}^{2}(xs)^{-\frac{1}{2}} \Delta x^{a}\Delta s ^{a} \bigr\Vert ^{2}, \\& \sum_{i\in I_{+}}\Delta x_{i}\Delta s_{i}\leq \frac{1}{4} \bigl\Vert \mu (xs)^{- \frac{1}{2}}-(xs)^{\frac{1}{2}}- \alpha _{a}^{2}(xs)^{-\frac{1}{2}} \Delta x^{a}\Delta s^{a} \bigr\Vert ^{2}. \end{aligned}$$

Proof

The proof is similar to that of Lemma 8 in [6], and it is omitted. □

According to (8) and Lemma 2, it can be found that

$$\begin{aligned} \biggl(\frac{g_{a}}{g} \biggr)^{2}\frac{g_{a}}{n} =&\frac{[(1-\alpha _{a})x^{T}s+\alpha _{a}^{2}(\Delta x^{a})^{T}\Delta s^{a}]^{3}}{n(x ^{T}s)^{2}} \\ \leq &\frac{[(1-\alpha _{a})x^{T}s+\frac{1}{4}\alpha _{a}^{2}x^{T}s]^{3}}{n(x ^{T}s)^{2}} \\ =& \biggl(1-\alpha _{a}+\frac{1}{4}\alpha _{a}^{2} \biggr)^{3}\mu _{g} \\ \leq & \biggl(1-\frac{3}{4}\alpha _{a} \biggr)^{3} \mu _{g}. \end{aligned}$$
(10)

Consequently, \(\frac{\mu }{\mu _{g}} \leq (1-\frac{3}{4}\alpha _{a} )^{3}\) if \(\mu = (\frac{g_{a}}{g} )^{2}\frac{g _{a}}{n}\).

The following theorem shows that the predictor step size has a lower bound.

Theorem 4

Let the current iterate \((x,s)\in N_{\infty }^{-}(\gamma )\), \((\Delta x^{a},\Delta s^{a})\) be the solution of (5) and \(\alpha _{a}\) be the predictor step size. Then

$$\begin{aligned} \alpha _{a}\geq \sqrt{\frac{\gamma }{(4\kappa +1)n}}. \end{aligned}$$

Proof

According to (5), we get

$$ \bigl(x_{i}+\alpha \Delta x_{i}^{a}\bigr) \bigl(s_{i}+\alpha \Delta s_{i}^{a}\bigr)=(1- \alpha )x_{i}s_{i}+\alpha ^{2}\Delta x_{i}^{a}\Delta s_{i}^{a}. $$

Note that \(\Delta x_{i}^{a} \Delta s_{i}^{a} \geq 0\) for \(i \in I_{+}\) by the definition of \(I_{+}\), and Lemma 2 gives \(\Delta x_{i}^{a}\Delta s_{i}^{a} \geq -\frac{4\kappa +1}{4}x^{T}s\) for \(i \in I_{-}\). Therefore, for all \(i \in I\),

$$ \Delta x_{i}^{a}\Delta s_{i}^{a} \geq -\frac{4\kappa +1}{4}x^{T}s. $$

Noting that \((x,s)\in N_{\infty }^{-}(\gamma )\) implies \(x_{i}s_{i} \geq \gamma \mu _{g}\), we have

$$ (1-\alpha )x_{i}s_{i}+\alpha ^{2}\Delta x_{i}^{a}\Delta s_{i}^{a} \geq (1- \alpha )\gamma \frac{x^{T}s}{n}-\frac{4\kappa +1}{4}\alpha ^{2}x^{T}s. $$

Thus, to guarantee \((x+\alpha \Delta x^{a},s+\alpha \Delta s^{a})\in {F} _{+}\), it suffices to show the following inequality:

$$\begin{aligned} (4\kappa +1)n\alpha ^{2}+4\gamma \alpha -4\gamma \leq 0. \end{aligned}$$
(11)

Clearly, inequality (11) is true if

$$ 0 \leq \alpha \leq \frac{2\sqrt{\gamma ^{2}+(4\kappa +1)n\gamma }-2 \gamma }{(4\kappa +1)n}. $$

Therefore, the predictor step size \(\alpha _{a}\) satisfies

$$ \alpha _{a} \geq \frac{2\sqrt{\gamma ^{2}+(4\kappa +1)n\gamma }-2 \gamma }{(4\kappa +1)n}. $$

Since \(0<\frac{\gamma }{(4\kappa +1)n}<\frac{1}{4\kappa +1}\frac{1}{4 \kappa +5}\frac{1}{n}<\frac{1}{2}\), it can be found that

$$ \frac{2\sqrt{\gamma ^{2}+(4\kappa +1)n\gamma }-2\gamma }{(4\kappa +1)n} =\frac{2}{1+\sqrt{1+(4\kappa +1)\frac{n}{\gamma }}}\geq \sqrt{\frac{ \gamma }{(4\kappa +1)n}}. $$

This completes the proof. □

In what follows, the lower bound of \(\alpha _{a}\) is used as the default predictor step size with the notation

$$\begin{aligned} \alpha _{a}=\sqrt{\frac{\gamma }{(4\kappa +1)n}}. \end{aligned}$$
(12)
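For instance (our illustration), with \(\kappa =\frac{1}{4}\), \(\gamma =0.01\) and \(n=100\), the default predictor step is \(\alpha _{a}=\sqrt{\frac{0.01}{2\cdot 100}}\approx 7.1\times 10^{-3}\).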

Lemma 5

If \((x,s)\in {N}_{\infty }^{-}(\gamma )\), and \((\Delta x,\Delta s)\) is the solution of (7) with \(\mu \geq 0\), then

$$\begin{aligned} \parallel \Delta x\Delta s\parallel \leq \omega \sqrt{ \biggl( \frac{1}{4}+ \kappa \biggr) \biggl(\frac{1}{2}+\kappa \biggr)} \end{aligned}$$

and

$$\begin{aligned} \Delta x^{T}\Delta s\leq \frac{1}{4}\omega , \end{aligned}$$

where \(\omega =\frac{n\mu ^{2}}{\gamma \mu _{g}} -2n\mu +\frac{n\mu \alpha _{a}^{2}(4\kappa +1)}{2\gamma }+\frac{\alpha _{a}^{4}+8\alpha _{a}^{2}+4\alpha _{a}^{2}(4\kappa +1)(1-\alpha _{a})+16}{16}n\mu _{g}\).

Proof

From Lemma 3, we have

$$ \Vert \Delta x\Delta s \Vert \leq \sqrt{ \biggl(\frac{1}{4}+\kappa \biggr) \biggl(\frac{1}{2}+\kappa \biggr)} \bigl\Vert \mu (xs)^{-\frac{1}{2}}-(xs)^{ \frac{1}{2}}-\alpha _{a}^{2}(xs)^{-\frac{1}{2}} \Delta x^{a}\Delta s ^{a} \bigr\Vert ^{2} $$

and

$$ \Delta x^{T}\Delta s\leq \frac{1}{4} \bigl\Vert \mu (xs)^{-\frac{1}{2}}-(xs)^{ \frac{1}{2}}-\alpha _{a}^{2}(xs)^{-\frac{1}{2}} \Delta x^{a}\Delta s ^{a} \bigr\Vert ^{2}, $$

where

$$\begin{aligned} & \bigl\Vert \mu (xs)^{-\frac{1}{2}}-(xs)^{\frac{1}{2}}-\alpha _{a}^{2}(xs)^{- \frac{1}{2}}\Delta x^{a}\Delta s^{a} \bigr\Vert ^{2} \\ &\quad =\mu ^{2}\sum_{i\in {I}}\frac{1}{x_{i}s_{i}}+ \sum_{i\in {I}}x_{i}s_{i}-2n\mu + \alpha _{a}^{4}\sum_{i \in {I}} \frac{ (\Delta x_{i}^{a}\Delta s_{i}^{a} )^{2}}{x _{i}s_{i}}\\ &\qquad {}-2\mu \alpha _{a}^{2}\sum_{i\in {I}} \frac{\Delta x_{i}^{a} \Delta s_{i}^{a}}{x_{i}s_{i}}+2\alpha _{a}^{2}\sum _{i\in {I}} \Delta x_{i}^{a}\Delta s_{i}^{a}. \end{aligned}$$

Since \(x_{i}s_{i}\geq \gamma \mu _{g}\) for all \(i\in {I}\), we have

$$ \mu ^{2}\sum_{i\in {I}}\frac{1}{x_{i}s_{i}}\leq \frac{n\mu ^{2}}{ \gamma \mu _{g}}. $$

From Lemma 1 and Lemma 2, it follows that

$$\begin{aligned} \sum_{i\in {I}}\frac{ (\Delta x_{i}^{a}\Delta s_{i}^{a} ) ^{2}}{x_{i}s_{i}} &=\sum _{i\in {I_{+}}}\frac{ (\Delta x _{i}^{a}\Delta s_{i}^{a} )^{2}}{x_{i}s_{i}}+\sum_{i\in {I_{-}}} \frac{ (\Delta x_{i}^{a}\Delta s_{i}^{a} ) ^{2}}{x_{i}s_{i}} \\ &\leq \sum_{i\in {I_{+}}}\frac{ (\frac{x_{i}s_{i}}{4} ) ^{2}}{x_{i}s_{i}}+\sum _{i\in {I_{-}}}\frac{-\Delta x_{i}^{a} \Delta s_{i}^{a}}{x_{i}s_{i}} \bigl(-\Delta x_{i}^{a} \Delta s_{i}^{a} \bigr) \\ &\leq \frac{x^{T}s}{16}+\frac{1-\alpha _{a}}{\alpha _{a}^{2}}\sum_{i\in {I_{-}}} \bigl\vert \Delta x_{i}^{a}\Delta s_{i}^{a} \bigr\vert \\ &\leq \frac{n\mu _{g}}{16}+\frac{(1-\alpha _{a})(4\kappa +1)n\mu _{g}}{4 \alpha _{a}^{2}}. \end{aligned}$$

Since \((x,s)\in {N}_{\infty }^{-}(\gamma )\) and \(\sum_{i\in I _{-}}|\Delta x_{i}^{a}\Delta s_{i}^{a}|\leq \frac{4\kappa +1}{4}x^{T}s\), we obtain

$$\begin{aligned} -2\mu \sum_{i\in {I}}\frac{\Delta x_{i}^{a}\Delta s_{i}^{a}}{x _{i}s_{i}}\leq 2\mu \sum _{i\in {I_{-}}}\frac{ \vert \Delta x_{i}^{a} \Delta s_{i}^{a} \vert }{x_{i}s_{i}} \leq \frac{2\mu }{\gamma \mu _{g}} \sum _{i\in {I_{-}}} \bigl\vert \Delta x_{i}^{a} \Delta s_{i}^{a} \bigr\vert \leq \frac{n \mu (4\kappa +1)}{2\gamma }. \end{aligned}$$

As \(\sum_{i\in I_{+}}\Delta x_{i}^{a}\Delta s_{i}^{a}\leq \frac{x ^{T}s}{4}\), we get

$$ 2\sum_{i\in {I}}\Delta x_{i}^{a} \Delta s_{i}^{a}\leq 2\sum_{i\in {I_{+}}} \Delta x_{i}^{a}\Delta s_{i}^{a}\leq \frac{n\mu _{g}}{2}. $$

Combining the above inequalities yields the result of this lemma. □

Corollary 6

If \((x,s)\in {N}_{\infty }^{-}(\gamma )\), \(\gamma \in (0,\frac{1}{4 \kappa +5} )\), and \((\Delta x,\Delta s)\) is the solution of (7) with \(\mu =\frac{\gamma }{1-\gamma }\mu _{g}\), then

$$ \Vert \Delta x\Delta s \Vert \leq pn\mu _{g},\qquad \Delta x^{T}\Delta s\leq qn\mu _{g}, $$

where \(p=\frac{14\kappa +11}{16}\sqrt{(1+4\kappa )(2+4\kappa )}\) and \(q=\frac{14\kappa +11}{16}\).

Proof

Since \(0<\gamma <\frac{1}{4\kappa +5}\leq \frac{1}{5}\), we have \(\frac{\gamma }{(1-\gamma )^{2}} \leq \frac{5}{16}\) and \(\frac{\alpha _{a}^{2}(4\kappa +1)-4\gamma }{2(1-\gamma )} \leq \frac{5\alpha _{a} ^{2}(4\kappa +1)}{8}\).

If \(\mu =\frac{\gamma }{1-\gamma }\mu _{g}\), then

$$\begin{aligned} \omega =& \biggl(\frac{\gamma }{(1-\gamma )^{2}}+\frac{\alpha _{a}^{2}(4 \kappa +1)-4\gamma }{2(1-\gamma )}+ \frac{\alpha _{a}^{4}+8\alpha _{a} ^{2}+4\alpha _{a}^{2}(4\kappa +1)(1-\alpha _{a})+16}{16} \biggr)n\mu _{g} \\ \leq & \biggl(\frac{5}{16}+\frac{5\alpha _{a}^{2}(4\kappa +1)}{8}+\frac{ \alpha _{a}^{4}+8\alpha _{a}^{2}+ 4\alpha _{a}^{2}(4\kappa +1)(1-\alpha _{a})+16}{16} \biggr)n \mu _{g} \\ \leq & \biggl(\frac{5}{16}+\frac{5(4\kappa +1)}{8}+\frac{1+8+4(4 \kappa +1)+16}{16} \biggr)n \mu _{g} =\frac{56\kappa +44}{16}n\mu _{g}, \end{aligned}$$

where the second inequality follows from \(0<\alpha _{a}\leq 1\).

From Lemma 5, it follows that

$$ \| \Delta x\Delta s\| \leq \omega \sqrt{ \biggl( \frac{1}{4}+ \kappa \biggr) \biggl(\frac{1}{2}+\kappa \biggr)} \leq pn\mu _{g}. $$

Similarly, the second result can easily be verified. □

For simplicity, the following notation is used in the rest of this paper:

$$ t=\max_{i\in {I}_{+}} \biggl\{ \frac{\Delta x_{i}^{a}\Delta s_{i}^{a}}{x _{i}s_{i}} \biggr\} . $$

Obviously, \(\Delta x_{i}^{a}\Delta s_{i}^{a} \leq tx_{i}s_{i}\) for all \(i\in I\), and \(t\leq \frac{1}{4}\) by Lemma 1.

Theorem 7

Let \((x,s)\in N_{\infty }^{-}(\gamma )\), \((\Delta x,\Delta s)\) be the solution of (7) with \(\mu =\frac{\gamma }{1- \gamma }\mu _{g}\) and α be the corrector step size. Then

$$ \alpha \geq \frac{7\gamma }{16pn}. $$

Proof

The corrector step size is the maximum α such that \(\alpha \in (0,1]\) and

$$ x_{i}(\alpha )s_{i}(\alpha )\geq \gamma \mu _{g}(\alpha ), \ \ \forall i\in I. $$

After a simple computation, we get

$$\begin{aligned} x_{i}(\alpha )s_{i}(\alpha ) =&x_{i}s_{i}+ \alpha \bigl(\mu -x_{i}s_{i}- \alpha _{a}^{2} \Delta x_{i}^{a}\Delta s_{i}^{a}\bigr)+ \alpha ^{2}\Delta x _{i}\Delta s_{i} \\ \geq &\bigl(1-\alpha -\alpha t \alpha _{a}^{2} \bigr)x_{i}s_{i}+\alpha \mu - \alpha ^{2}pn\mu _{g}, \end{aligned}$$

where the inequality is due to \(\Delta x_{i}^{a}\Delta s_{i}^{a} \leq tx_{i}s_{i}\) and \(\|\Delta x\Delta s\|\leq pn\mu _{g}\). Clearly, \(1+t\alpha _{a}^{2} \leq 1+\frac{1}{4}\alpha _{a}^{2}\). Thus, for \(0 \leq \alpha \leq \frac{1}{1+\frac{1}{4}\alpha _{a}^{2}}\), we have

$$ x_{i}(\alpha )s_{i}(\alpha ) \geq \bigl(1-\alpha -\alpha t \alpha _{a}^{2}\bigr) \gamma \mu _{g}+\alpha \mu - \alpha ^{2}pn\mu _{g}. $$

Applying Lemma 2 and Corollary 6 yields

$$\begin{aligned} \mu _{g}(\alpha ) =& \frac{(x+\alpha \Delta x)^{T}(s+\alpha \Delta s)}{n} \\ =&(1-\alpha )\mu _{g}+\alpha \mu -\alpha \alpha _{a}^{2} \frac{{\Delta x^{a}}^{T}\Delta s^{a}}{n}+\alpha ^{2}\frac{\Delta x^{T}\Delta s}{n} \\ \leq &(1-\alpha )\mu _{g}+\alpha \mu +\alpha \alpha _{a}^{2}\kappa \mu _{g}+\alpha ^{2}q\mu _{g}. \end{aligned}$$
(13)

To prove \(x_{i}(\alpha )s_{i}(\alpha )\geq \gamma \mu _{g}(\alpha )\), one has to show that

$$\begin{aligned} \bigl(1-\alpha -\alpha t \alpha _{a}^{2}\bigr)\gamma \mu _{g}+\alpha \mu -\alpha ^{2}pn\mu _{g} \geq \gamma \bigl[(1-\alpha )\mu _{g}+\alpha \mu +\alpha \alpha _{a}^{2}\kappa \mu _{g}+\alpha ^{2}q\mu _{g} \bigr]. \end{aligned}$$

If \(\mu =\frac{\gamma }{1-\gamma }\mu _{g}\), then the above inequality is equivalent to

$$\begin{aligned} \gamma -\gamma \kappa \alpha _{a}^{2}-t\gamma \alpha _{a}^{2}\geq \alpha (pn+q\gamma ). \end{aligned}$$
(14)

Since \(\alpha _{a}=\sqrt{\frac{\gamma }{(4\kappa +1)n}}\), \(\gamma <1\) and \(t\leq \frac{1}{4}\), we have

$$\begin{aligned} \gamma -\gamma \kappa \alpha _{a}^{2}-t\gamma \alpha _{a}^{2}\geq \gamma -\frac{4\kappa \gamma ^{2}+\gamma ^{2}}{4n(4\kappa +1)} \geq \gamma - \frac{4\kappa \gamma +\gamma }{4n(4\kappa +1)}\geq \frac{7}{8}\gamma . \end{aligned}$$

Noting that \(pn>q\gamma \), then inequality (14) is true if \(0 \leq \alpha \leq \frac{7\gamma }{16pn}\). We can conclude that the corrector step size α satisfies

$$ \alpha \geq \min \biggl\{ \frac{1}{1+\frac{1}{4}\alpha _{a}^{2}},\frac{7 \gamma }{16pn} \biggr\} = \frac{7\gamma }{16pn}. $$

 □

In Algorithm 1, the corrector step size α has an upper bound \(\alpha _{1}\), that is,

$$\begin{aligned} \alpha \leq \alpha _{1}=\frac{1-2\gamma -(1-\gamma )\kappa \alpha _{a} ^{2}}{2q(1-\gamma )}. \end{aligned}$$
(15)

The next theorem shows that this upper bound is well defined.

Theorem 8

Let \(\alpha _{a}\) be the default predictor step size and \(\gamma \in (0,\frac{1}{4\kappa +5} )\). Then \(\frac{7\gamma }{16pn}<\frac{1-2\gamma -(1-\gamma )\kappa \alpha _{a} ^{2}}{2q(1-\gamma )}<1\).

Proof

Since \(q=\frac{14\kappa +11}{16}>\frac{1}{2}\), it is clear that \(\frac{1-2\gamma -(1-\gamma )\kappa \alpha _{a}^{2}}{2q(1-\gamma )}<1\). As \(n\geq 2\) and \(p>q>\frac{1}{2}\), we have

$$ \frac{7\gamma }{16pn} \leq \frac{7\gamma }{16}. $$

According to (12), one has \(\kappa \alpha _{a}^{2}=\frac{ \kappa \gamma }{n(4\kappa +1)}\leq \frac{\gamma }{8}\), thus

$$ \frac{1-2\gamma -(1-\gamma )\kappa \alpha _{a}^{2}}{1-\gamma } \geq \frac{1-2 \gamma -(1-\gamma )\frac{\gamma }{8}}{1-\gamma }=2+ \frac{1}{\gamma -1}- \frac{\gamma }{8}. $$

From \(\gamma <\frac{1}{4\kappa +5}\leq \frac{1}{ 5}\), it follows that

$$ 2+\frac{1}{\gamma -1}-\frac{\gamma }{8} > \frac{7\gamma }{16}. $$

Therefore

$$ \frac{1-2\gamma -(1-\gamma )\kappa \alpha _{a}^{2}}{2q(1-\gamma )}>\frac{7 \gamma }{16pn}, $$

which completes the proof. □
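As a numerical spot-check of Theorem 8 (our illustration, not part of the paper), the quantities p, q, \(\alpha _{a}\) and \(\alpha _{1}\) can be evaluated for sample parameter values:

```python
import numpy as np

# Spot-check of Theorem 8 for sample (kappa, gamma, n); illustration only.
for kappa in (0.0, 0.25, 1.0):
    gamma = 0.5 / (4*kappa + 5)          # any gamma in (0, 1/(4*kappa + 5))
    for n in (2, 10, 100):
        p = (14*kappa + 11) / 16 * np.sqrt((1 + 4*kappa) * (2 + 4*kappa))
        q = (14*kappa + 11) / 16
        alpha_a = np.sqrt(gamma / ((4*kappa + 1) * n))       # default step (12)
        lower = 7 * gamma / (16 * p * n)
        alpha_1 = (1 - 2*gamma - (1 - gamma)*kappa*alpha_a**2) / (2*q*(1 - gamma))
        assert lower < alpha_1 < 1, (kappa, n)
print("Theorem 8 bounds hold for the sampled parameters.")
```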

In the following theorem, we obtain an upper bound of the iteration number.

Theorem 9

Algorithm 1 stops after at most

$$ O \biggl((14\kappa +11)\sqrt{(1+4\kappa ) (1+2\kappa )}~n\log \frac{(x ^{0})^{T}s^{0}}{\varepsilon } \biggr) $$

iterations with a solution for which \(x^{T}s\leq \varepsilon \).

Proof

If \(\alpha _{a}\geq 0.3\) and \(\alpha \geq \frac{7\gamma }{16pn}\), then Algorithm 1 adopts Mehrotra’s strategy in the corrector step, i.e., \(\mu = (\frac{g_{a}}{g} )^{2}\frac{g_{a}}{n}\). Based on (13), we have

$$\begin{aligned} \mu _{g}(\alpha ) \leq & \biggl[1- \biggl(1-\kappa \alpha _{a}^{2}-q \alpha -\frac{\mu }{\mu _{g}} \biggr)\alpha \biggr] \mu _{g} \\ \leq & \biggl[1- \biggl(1-\kappa \alpha _{a}^{2}-q \frac{1-2\gamma -(1- \gamma )\kappa \alpha _{a}^{2}}{2q(1-\gamma )}- \biggl(1-\frac{3}{4} \alpha _{a} \biggr)^{3} \biggr)\alpha \biggr]\mu _{g} \\ \leq & \biggl[1- \biggl(1-0.0125-\frac{1-2\gamma }{2(1-\gamma )}-0.47 \biggr) \alpha \biggr] \mu _{g} \\ \leq & \biggl[1-\frac{\gamma }{2(1-\gamma )}\alpha \biggr]\mu _{g} \\ \leq & \biggl[1-\frac{7\gamma ^{2}}{32pn(1-\gamma )} \biggr]\mu _{g}, \end{aligned}$$

where the second inequality follows from (10) and (15), the third inequality is due to \(\kappa \alpha _{a}^{2}\leq 0.025\) and \(1-\frac{3}{4}\alpha _{a}\leq \frac{31}{40}\), and the fourth inequality comes from \(1-0.0125-0.47 \geq \frac{1}{2}\).

If \(\alpha _{a}<0.3\) or \(\alpha <\frac{7\gamma }{16pn}\), then Algorithm 1 adopts the safeguard strategy in the corrector step, that is, \(\mu =\frac{\gamma }{1-\gamma }\mu _{g}\). In this case, from \(\frac{7\gamma }{16pn} \leq \alpha \leq \frac{1-2\gamma -(1-\gamma ) \kappa \alpha _{a}^{2}}{2q(1-\gamma )}\), we get

$$\begin{aligned} \mu _{g}(\alpha ) \leq & \biggl[1- \biggl(1-\kappa \alpha _{a}^{2}-q \alpha -\frac{\mu }{\mu _{g}} \biggr)\alpha \biggr] \mu _{g} \\ \leq & \biggl[1- \biggl(1-\kappa \alpha _{a}^{2}-q \frac{1-2\gamma -(1- \gamma )\kappa \alpha _{a}^{2}}{2q(1-\gamma )}- \frac{\gamma }{1- \gamma } \biggr)\alpha \biggr]\mu _{g} \\ =& \biggl[1- \biggl(\frac{1-2\gamma }{2(1-\gamma )}-\frac{\kappa \alpha _{a}^{2}}{2} \biggr)\alpha \biggr] \mu _{g} \\ \leq & \biggl[1-\frac{(39-79\gamma )7\gamma }{1280pn(1-\gamma )} \biggr] \mu _{g}, \end{aligned}$$

where the last inequality follows from \(\kappa \alpha _{a}^{2}\leq 0.025\) and \(\alpha \geq \frac{7\gamma }{16pn}\). This completes the proof by Theorem 3.2 of [1]. □
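For completeness (our added remark), the step behind the citation of [1] is the standard estimate: both cases give \(\mu _{g}(\alpha )\leq (1-\delta )\mu _{g}\) with \(\delta =\Omega (\frac{\gamma ^{2}}{pn} )\), hence

$$ \mu _{g}^{k}\leq (1-\delta )^{k}\mu _{g}^{0}\leq \frac{\varepsilon }{n} \quad \text{for } k\geq \frac{1}{\delta }\log \frac{n\mu _{g}^{0}}{\varepsilon }=\frac{1}{\delta }\log \frac{(x^{0})^{T}s^{0}}{\varepsilon }, $$

and, for fixed γ, \(\frac{1}{\delta }=O(pn)=O ((14\kappa +11)\sqrt{(1+4\kappa )(1+2\kappa )}~n )\), which is the bound of Theorem 9.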

4 Numerical results

In this section, some numerical results are reported. The results were obtained using MATLAB R2014a.

The algorithm presented in this paper is compared with a Mizuno–Todd–Ye (MTY) type predictor–corrector algorithm [6], Cho’s algorithm [8] and an interior point algorithm based on the classical kernel function \(\varphi (t)= \frac{t^{2}-1}{2}-\log {t}\) (IPMCKF) [20]. We consider the following two problems.

Problem 1

([21])

$$ M= \begin{pmatrix} 0&1 \\ -2&0 \end{pmatrix},\qquad q= \begin{pmatrix} 2 \\ 3 \end{pmatrix},\qquad x_{0}= \begin{pmatrix} 0.4 \\ 0.45 \end{pmatrix}, \qquad s_{0}= \begin{pmatrix} 2.45 \\ 2.2 \end{pmatrix}. $$

Problem 2

([22])

$$ M= \begin{pmatrix} 1&2&2&\cdots &2 \\ 2&5&6&\cdots &6 \\ 2&6&9&\cdots &10 \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ 2&6&10&\cdots &4n-3 \end{pmatrix},\qquad q= \begin{pmatrix} -1\\ -1\\ -1\\ \vdots \\ -1 \end{pmatrix},\qquad x_{0}= \begin{pmatrix} 1\\ 1\\ 1\\ \vdots \\ 1 \end{pmatrix}. $$

Problem 1 is a \(P_{*}(\frac{1}{4})\) LCP, and Problem 2 is a \(P_{*}(0)\) LCP. We solve the two problems using the above-mentioned algorithms. For all algorithms we set the accuracy parameter \(\varepsilon =10^{-8}\). In Algorithm 1, we set \(\gamma =0.01\).
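For a rough reproduction (ours; the reported iteration counts come from the authors' MATLAB implementation, so the sketch below need not match them), the two test problems can be set up and fed to the mpc_lcp sketch from Sect. 2:

```python
import numpy as np

# Problem 1 (a P_*(1/4) LCP); mpc_lcp is the sketch given after Algorithm 1.
M1 = np.array([[0.0, 1.0], [-2.0, 0.0]])
q1 = np.array([2.0, 3.0])
x0 = np.array([0.4, 0.45])
s0 = M1 @ x0 + q1                        # equals (2.45, 2.2) as in the paper
x, s = mpc_lcp(M1, q1, x0, s0, kappa=0.25, gamma=0.01, eps=1e-8)
print("Problem 1 gap:", x @ s)

# Problem 2 (a P_*(0) LCP): M_ij = 4*min(i,j) - 2 off the diagonal and
# M_ii = 4*i - 3, i.e. M = 4*min(i,j) - 2 - I; q = -e, x0 = e.
def problem2(n):
    idx = np.arange(1, n + 1)
    M = 4 * np.minimum.outer(idx, idx) - 2.0 - np.eye(n)
    return M, -np.ones(n), np.ones(n)

M2, q2, x0 = problem2(10)
s0 = M2 @ x0 + q2
x, s = mpc_lcp(M2, q2, x0, s0, kappa=0.0, gamma=0.01, eps=1e-8)
print("Problem 2 gap:", x @ s)
```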

Table 1 shows the iteration numbers of Algorithm 1, the MTY algorithm, Cho's algorithm and the IPMCKF algorithm for Problem 1. The results show that Algorithm 1 requires the fewest iterations.

Table 1 The iteration numbers of Problem 1

Table 2 gives the iteration numbers of the four algorithms for Problem 2 with \(n \in \{10,20,30,40,50,100,150,200\}\). The numerical results show that Algorithm 1 requires the fewest iterations. Since the safeguard step helps Algorithm 1 avoid small steps, Algorithm 1 is efficient.

Table 2 The iteration numbers of Problem 2

5 Concluding remarks

In this paper, a Mehrotra-type predictor–corrector algorithm for \(P_{*}(\kappa )\) LCPs is studied. Since \(P_{*}(\kappa )\) LCPs generalize linear programming, the search directions Δx and Δs are no longer orthogonal, and therefore the analysis differs from that of linear programming. The iteration bound of our algorithm is \(O ((14\kappa +11) \sqrt{(1+4\kappa )(1+2\kappa )}~n\log \frac{(x^{0})^{T}s^{0}}{\varepsilon } )\). Numerical results show that this algorithm is efficient.

References

  1. Wright, S.J.: Primal–Dual Interior-Point Methods. Society for Industrial and Applied Mathematics, Philadelphia (1997)

  2. Billups, S.C., Murty, K.G.: Complementarity problems. J. Comput. Appl. Math. 124(1), 303–318 (2000)

  3. Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, New York (1992)

  4. Roos, C., Terlaky, T., Vial, J.P.: Interior Point Methods for Linear Optimization. Springer, Boston (2006)

  5. Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Springer, Berlin (1991)

  6. Illés, T., Nagy, M.: A Mizuno–Todd–Ye type predictor–corrector algorithm for sufficient linear complementarity problems. Eur. J. Oper. Res. 181(3), 1097–1111 (2007)

  7. Miao, J.: A quadratically convergent \(O((1+\kappa )\sqrt{n}L)\)-iteration algorithm for the \(P_{*}(\kappa )\)-matrix linear complementarity problem. Math. Program. 69(1), 355–368 (1995)

  8. Cho, G.M.: A new large-update interior point algorithm for \(P_{*}(\kappa )\) linear complementarity problems. J. Comput. Appl. Math. 216(1), 265–278 (2008)

  9. Cho, G.M., Kim, M.K.: A new large-update interior point algorithm for \(P_{*}(\kappa )\) LCPs based on kernel functions. Appl. Math. Comput. 182(2), 1169–1183 (2006)

  10. Liu, H., Liu, X., Liu, C.: Mehrotra-type predictor–corrector algorithms for sufficient linear complementarity problem. Appl. Numer. Math. 62(12), 1685–1700 (2012)

  11. Czyzyk, J., Mehrotra, S., Wagner, M., Wright, S.: PCx: an interior-point code for linear programming. Optim. Methods Softw. 11(1–4), 397–430 (1999)

  12. Zhang, Y.: Solving large-scale linear programs by interior-point methods under the Matlab environment. Optim. Methods Softw. 10(1), 1–31 (1998)

  13. Mehrotra, S.: On finding a vertex solution using interior point methods. Linear Algebra Appl. 152, 233–253 (1991)

  14. Jarre, F., Wechs, M.: Extending Mehrotra’s corrector for linear programs. Adv. Model. Optim. 1(2), 38–60 (1999)

  15. Zhang, Y., Zhang, D.: On polynomiality of the Mehrotra-type predictor–corrector interior-point algorithms. Math. Program. 68(1), 303–318 (1995)

  16. Salahi, M., Peng, J., Terlaky, T.: On Mehrotra-type predictor–corrector algorithms. SIAM J. Optim. 18(4), 1377–1397 (2007)

  17. Salahi, M., Terlaky, T.: Mehrotra-type predictor–corrector algorithm revisited. Optim. Methods Softw. 23(2), 259–273 (2008)

  18. Liu, X., Liu, H., Wang, W.: Polynomial convergence of Mehrotra-type predictor–corrector algorithm for the Cartesian \(P_{*}(\kappa )\)-LCP over symmetric cones. Optimization 64(4), 815–837 (2015)

  19. Yang, X., Liu, H., Dong, X.: Polynomial convergence of Mehrotra-type prediction–corrector infeasible-IPM for symmetric optimization based on the commutative class directions. Appl. Math. Comput. 230(2), 616–628 (2014)

  20. Lesaja, G., Roos, C.: Unified analysis of kernel-based interior-point methods for \(P_{*}(\kappa )\)-linear complementarity problems. SIAM J. Optim. 20(6), 3014–3039 (2010)

  21. Lee, Y.-H., Cho, Y.-Y., Cho, G.-M.: Interior-point algorithms for \(P_{*}(\kappa )\) LCP based on a new class of kernel functions. J. Glob. Optim. 58, 137–149 (2014)

  22. Harker, P.T., Pang, J.-S.: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265–284 (1990)


Funding

This work was supported by the National Natural Science Foundation of China (71471102, 71561022 and 71771025).

Author information


Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yiyuan Zhou.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Zhou, Y., Zhang, M. & Huang, Z. On complexity of a new Mehrotra-type interior point algorithm for \(P_{*}(\kappa )\) linear complementarity problems. J Inequal Appl 2019, 6 (2019). https://doi.org/10.1186/s13660-019-1954-5

