Kernel function based interior-point methods for horizontal linear complementarity problems
© Lee et al.; licensee Springer 2013
Received: 30 November 2012
Accepted: 12 April 2013
Published: 29 April 2013
It is well known that each kernel function defines an interior-point algorithm. In this paper we propose new classes of kernel functions whose form differs from that of known kernel functions, and we define interior-point methods (IPMs), whose barrier term is an exponential power of an exponential function, based on these functions for $P_*(\kappa)$-horizontal linear complementarity problems (HLCPs). New search directions and proximity measures are defined by these kernel functions. We obtain the currently best known complexity results for large- and small-update methods.
In this paper we consider the $P_*(\kappa)$-horizontal linear complementarity problem ($P_*(\kappa)$-HLCP): find a pair $(x, s)$ such that

$$Mx + Ns = h, \qquad xs = 0, \qquad (x, s) \ge 0, \tag{1}$$

where $M, N \in \mathbb{R}^{n \times n}$, $h \in \mathbb{R}^n$, and $(M, N)$ is a $P_*(\kappa)$-pair (Definition 2.2).
$P_*(\kappa)$-HLCPs have many applications in economic equilibrium problems, noncooperative games, traffic assignment problems, and optimization problems [1, 2]. The $P_*(\kappa)$-HLCP (1) includes the standard linear complementarity problem (LCP) as well as linear and quadratic optimization problems. Indeed, when N is nonsingular, the $P_*(\kappa)$-HLCP reduces to a $P_*(\kappa)$-LCP. Furthermore, when $\kappa = 0$, the $P_*(\kappa)$-HLCP is the monotone HLCP.
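To make the reduction to a standard LCP concrete: when N is nonsingular, solving the equality constraint $Mx + Ns = h$ for s gives $s = \tilde{M}x + q$ with $\tilde{M} = -N^{-1}M$ and $q = N^{-1}h$. A minimal numerical sketch (the data below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Illustrative HLCP data: M arbitrary, N made safely invertible by a diagonal shift.
M = rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 10 * np.eye(n)
h = rng.standard_normal(n)

# Reduction:  M x + N s = h  <=>  s = q + Mtilde @ x  with
Mtilde = -np.linalg.solve(N, M)   # -N^{-1} M
q = np.linalg.solve(N, h)         #  N^{-1} h

# Any x determines the unique s satisfying the linear constraint.
x = rng.standard_normal(n)
s = q + Mtilde @ x
assert np.allclose(M @ x + N @ s, h)   # the HLCP equality holds
```

The complementarity and nonnegativity conditions are untouched by this substitution, which is why the reduced problem is exactly the standard LCP $s = \tilde{M}x + q$, $xs = 0$, $x, s \ge 0$.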
Recently, Bai et al. [3] defined the concept of eligible kernel functions, which must satisfy four conditions, and proposed primal-dual IPMs for linear optimization (LO) problems based on these functions; some of these methods achieve the best known complexity results for both large- and small-update methods. Cho [4] and Cho and Kim [5] extended these algorithms for LO to $P_*(\kappa)$-linear complementarity problems (LCPs) and obtained complexity results for large-update methods similar to those for LO problems. Amini et al. [6, 7] introduced new IPMs based on parametric versions of the kernel function in [8] and, supported by numerical tests, obtained better iteration bounds than the algorithm in [8]. Wang and Bai [2] generalized polynomial IPMs for the LO problem to $P_*(\kappa)$-HLCP based on a finite kernel function, which was first defined in [8], and obtained the same iteration bounds for large- and small-update methods as for the LO problem. Ghami and Steihaug [9] extended IPMs for LO problems to $P_*(\kappa)$-LCPs based on the eligible kernel functions defined in [3] and proposed large- as well as small-update methods. Lesaja and Roos [10] also proposed IPMs for $P_*(\kappa)$-LCPs based on ten kernel functions which were originally defined for LO problems. Ghami et al. [11] proposed an IPM for the LO problem based on a kernel function whose barrier term is a trigonometric function; however, this method does not attain the best known iteration bound for a large-update method. Cho et al. [12] defined a new kernel function, whose barrier term is an exponential power of the exponential function, for LO problems and obtained the best known iteration bounds for large- and small-update methods.
Motivated by these works, we introduce new classes of eligible kernel functions, which differ from the known kernel functions in [3, 6, 7] in having an exponential power of an exponential barrier term, and give a complexity analysis of IPMs for $P_*(\kappa)$-HLCP based on these kernel functions. We show that these algorithms have $O((1+2\kappa)\sqrt{n}\log n\log\frac{n}{\epsilon})$ and $O((1+2\kappa)\sqrt{n}\log\frac{n}{\epsilon})$ iteration bounds for large- and small-update methods, respectively, which are currently the best known iteration bounds for such methods.
The paper is organized as follows. In Section 2 we present some basic concepts and a generic interior-point algorithm for $P_*(\kappa)$-HLCP. In Section 3 we introduce new classes of eligible kernel functions and their technical properties. Finally, we derive the framework for analyzing the iteration bounds and the complexity results of the algorithms based on these kernel functions in Section 4.
Notational conventions: $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the sets of n-dimensional nonnegative vectors and positive vectors, respectively. For $x, s \in \mathbb{R}^n$, $\min(x)$, $xs$, and $\sqrt{x}$ denote the smallest component of the vector x, the componentwise product of the vectors x and s, and the column vector $(\sqrt{x_1}, \ldots, \sqrt{x_n})^T$, respectively. We denote by D the diagonal matrix obtained from a vector d, i.e., $D = \operatorname{diag}(d)$. e denotes the n-dimensional vector of ones. For $f, g : \mathbb{R}_{++} \to \mathbb{R}_{++}$, $f(x) = O(g(x))$ if $f(x) \le c\,g(x)$ for some positive constant c, and $f(x) = \Theta(g(x))$ if $c_1 g(x) \le f(x) \le c_2 g(x)$ for some positive constants $c_1$ and $c_2$.
In this section we recall some basic definitions and introduce a generic interior-point algorithm for $P_*(\kappa)$-HLCP.
Definition 2.1 
- (i) M is called a positive semidefinite matrix if $x^T M x \ge 0$ for all $x \in \mathbb{R}^n$.
- (ii) M is called a $P_0$-matrix if for each nonzero $x \in \mathbb{R}^n$ there exists an index i such that $x_i \neq 0$ and $x_i (Mx)_i \ge 0$.
- (iii) M is called a $P_*(\kappa)$-matrix for $\kappa \ge 0$ if
$$(1 + 4\kappa) \sum_{i \in I_+(x)} x_i (Mx)_i + \sum_{i \in I_-(x)} x_i (Mx)_i \ge 0 \quad \text{for all } x \in \mathbb{R}^n,$$
where $(Mx)_i$ denotes the i-th component of the vector Mx, $I_+(x) = \{ i : x_i (Mx)_i > 0 \}$, and $I_-(x) = \{ i : x_i (Mx)_i < 0 \}$.
Definition 2.2 
- (i) (M, N) is called a monotone pair if $Mx + Ns = 0$ implies $x^T s \ge 0$.
- (ii) (M, N) is called a $P_0$-pair if $Mx + Ns = 0$ and $(x, s) \neq 0$ implies that there exists an index i such that $x_i \neq 0$ or $s_i \neq 0$, and $x_i s_i \ge 0$.
- (iii) (M, N) is called a $P_*(\kappa)$-pair for $\kappa \ge 0$ if $Mx + Ns = 0$ implies that $(1 + 4\kappa) \sum_{i \in I_+} x_i s_i + \sum_{i \in I_-} x_i s_i \ge 0$, where $I_+ = \{ i : x_i s_i > 0 \}$ and $I_- = \{ i : x_i s_i < 0 \}$.
Lemma 2.3 If (M, N) is a $P_0$-pair, then $\begin{bmatrix} M & N \\ D_s & D_x \end{bmatrix}$ is a nonsingular matrix for any positive diagonal matrices $D_x$ and $D_s$.
Proof Assume that the matrix $\begin{bmatrix} M & N \\ D_s & D_x \end{bmatrix}$ is singular. Then $\begin{bmatrix} M & N \\ D_s & D_x \end{bmatrix} \begin{bmatrix} x \\ s \end{bmatrix} = 0$ for some nonzero $(x, s)$, i.e., $Mx + Ns = 0$ and $D_s x + D_x s = 0$, $(x, s) \neq 0$. Hence $s = -D_x^{-1} D_s x$, and we have an index i such that $x_i \neq 0$ or $s_i \neq 0$, and $x_i s_i \ge 0$, since $(M, N)$ is a $P_0$-pair; because $s = -D_x^{-1} D_s x$, either case forces $x_i \neq 0$. On the other hand, $x_i s_i = -(D_x^{-1} D_s)_{ii} x_i^2 < 0$. This is a contradiction. This completes the proof. □
Since the class of $P_0$-pairs includes the class of $P_*(\kappa)$-pairs, we obtain the following corollary.
Corollary 2.4 If (M, N) is a $P_*(\kappa)$-pair, then for any $a, b \in \mathbb{R}^n$ and any positive diagonal matrices $D_x$ and $D_s$, the system $Mu + Nw = a$, $D_s u + D_x w = b$ has a unique solution $(u, w)$.
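The nonsingularity property above is easy to probe numerically. A small sketch (illustrative; it assumes the coefficient matrix has the block form $\begin{bmatrix} M & N \\ D_s & D_x \end{bmatrix}$, which is an assumption here, and uses the monotone pair $(M, -I)$ with M positive semidefinite):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Monotone pair (M, -I) with M positive semidefinite:
# Mx - s = 0 implies x.T @ s = x.T @ M @ x >= 0.
A = rng.standard_normal((n, n))
M = A @ A.T                 # positive semidefinite
N = -np.eye(n)

for _ in range(5):
    Dx = np.diag(rng.uniform(0.1, 2.0, n))   # arbitrary positive diagonals
    Ds = np.diag(rng.uniform(0.1, 2.0, n))
    K = np.block([[M, N], [Ds, Dx]])
    # nonsingularity: the determinant stays bounded away from zero
    assert abs(np.linalg.det(K)) > 1e-10
```

Since monotone pairs are $P_*(\kappa)$-pairs (with $\kappa = 0$), this is a special case of the lemma; the Schur complement here is $M + D_x^{-1} D_s$, which is positive definite.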
Without loss of generality, we assume that (1) satisfies the interior-point condition (IPC), i.e., there exists $(x^0, s^0) > 0$ such that $Mx^0 + Ns^0 = h$. Since $(M, N)$ is a $P_*(\kappa)$-pair and (1) satisfies the IPC, the system

$$Mx + Ns = h, \qquad xs = \mu e, \qquad (x, s) > 0, \tag{2}$$

has a unique solution $(x(\mu), s(\mu))$ for each $\mu > 0$, which is called the μ-center. The set of μ-centers $\{ (x(\mu), s(\mu)) : \mu > 0 \}$ is called the central path of (1). The limit of the central path exists as $\mu \to 0$, and since the limit point satisfies (1), it naturally yields a solution of (1) [13]. IPMs follow this central path approximately and approach a solution of (1) as $\mu \to 0$.
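The central path can be seen explicitly in one dimension. For the monotone pair $M = 1$, $N = -1$ (constraint $x - s = h$), system (2) reads $x - s = h$, $xs = \mu$, whose μ-center is $x(\mu) = \frac{h + \sqrt{h^2 + 4\mu}}{2}$, $s(\mu) = x(\mu) - h$. A short sketch (illustrative example, not from the paper) showing the path approaching the complementary solution $(\max(h,0), \max(-h,0))$ as $\mu \to 0$:

```python
import math

def mu_center(h, mu):
    """mu-center of the 1-D HLCP  x - s = h, xs = 0, x, s >= 0  (pair (1, -1))."""
    x = (h + math.sqrt(h * h + 4 * mu)) / 2   # positive root of x(x - h) = mu
    s = x - h                                  # then x * s = mu by construction
    return x, s

h = 0.7
for mu in (1.0, 1e-2, 1e-4, 1e-8):
    x, s = mu_center(h, mu)
    assert x > 0 and s > 0 and abs(x * s - mu) < 1e-12
# as mu -> 0 the mu-centers approach the solution (x, s) = (0.7, 0.0)
```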
If $\tau = O(n)$ and $\theta = \Theta(1)$, then the algorithm is called a large-update method. When $\tau = O(1)$ and $\theta = \Theta(1/\sqrt{n})$, we call the algorithm a small-update method.
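The generic algorithm can be sketched as follows. Since the paper's kernel functions are not reproduced here, the classical logarithmic kernel $\psi(t) = \frac{t^2-1}{2} - \log t$ is used as a stand-in; the loop structure (outer μ-update, inner damped Newton steps until the proximity $\Psi(v) = \sum_i \psi(v_i)$ with $v = \sqrt{xs/\mu}$ drops below τ) is the generic scheme, and all data below are illustrative:

```python
import numpy as np

# Stand-in kernel: classical logarithmic kernel psi(t) = (t^2 - 1)/2 - log t.
def psi(t):  return (t ** 2 - 1) / 2 - np.log(t)
def dpsi(t): return t - 1.0 / t

def kernel_ipm(M, N, h, x, s, theta=0.5, tau=3.0, eps=1e-8):
    """Generic kernel-based IPM for the HLCP:  Mx + Ns = h, xs = 0, x, s >= 0."""
    n = len(x)
    mu = x @ s / n
    while n * mu >= eps:
        mu *= 1 - theta                                 # outer iteration: mu-update
        while psi(np.sqrt(x * s / mu)).sum() > tau:     # inner loop on Psi(v)
            v = np.sqrt(x * s / mu)
            # Newton system:  M dx + N ds = 0,   s*dx + x*ds = -mu * v * psi'(v)
            A = np.block([[M, N], [np.diag(s), np.diag(x)]])
            rhs = np.concatenate([np.zeros(n), -mu * v * dpsi(v)])
            dx, ds = np.split(np.linalg.solve(A, rhs), 2)
            # fraction-to-boundary damping keeps (x, s) strictly positive
            alpha = 1.0
            for z, dz in ((x, dx), (s, ds)):
                neg = dz < 0
                if neg.any():
                    alpha = min(alpha, 0.9 * np.min(-z[neg] / dz[neg]))
            x, s = x + alpha * dx, s + alpha * ds
    return x, s

# Monotone pair (M, -I):  Mx - s = h  is the standard LCP  s = Mx - h.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
N = -np.eye(2)
x0 = s0 = np.ones(2)
h = M @ x0 + N @ s0                       # (x0, s0) is strictly feasible
x, s = kernel_ipm(M, N, h, x0, s0)
```

With the logarithmic kernel the scaled right-hand side $-\mu v \psi'(v)$ reduces componentwise to $\mu e - xs$, so the sketch recovers the classical primal-dual path-following step; the paper's kernels would change only `psi` and `dpsi`.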
3 New kernel functions
In this section we define new classes of kernel functions and give their essential properties.
Table 1 The new kernel functions, with parameters p ≥ 1 and r ≥ 1
Table 2 The first two derivatives of the kernel functions
Table 3 The third derivative of the kernel functions
In the following lemma, we show that the kernel functions in Table 1 are eligible [3].
- $t\psi''(t) + \psi'(t) > 0$ for $t > 0$, i.e., ψ is exponentially convex,
Conditions (a) and (b)
Remark 3.2 For , , let , .
Since , , from Table 2, , , are monotonically decreasing with respect to .
In the same way as in (i), we obtain the result (ii). This completes the proof. □
For (ii), we obtain the result in the same way as above. This completes the proof. □
Similarly, we obtain the result (ii). This completes the proof. □
The following lemma gives a relation between two proximity measures.
Hence we have .
For (ii), we obtain the result in the same way as above. This completes the proof. □
Using the eligible conditions (b) and (c) in Lemma 3.1, we obtain the following lemma.
Lemma 3.7 (Theorem 3.2 in [3])
In the following lemma, we give upper bounds of , , after a μ-update.
where the last inequality holds from , .
In the same way as in the proof of (i), we obtain the result (ii). This completes the proof. □
We will use and for the upper bounds of from (14) for large- and small-update methods, respectively, .
Remark 3.9 For the large-update method with and , , , and for the small-update method with and , , .
For notational convenience, let and , .
For notational convenience, we denote and , .
In the following lemmas, we state the same technical properties as in [3].
Lemma 3.10 (Lemma 4.4 in [3])
Lemma 3.11 (Lemma 4.5 in [3])
Lemma 3.12 (Lemma 4.6 in [3])
Then we have .
Lemma 3.14 (Lemma 4.10 in [3])
The right-hand sides in Lemma 3.13 are monotonically decreasing with respect to δ.
Lemma 3.15 (Proposition 1.3.2 in [17])
where and . Then .
Proof From Lemma 3.15 and Lemma II.17 in [18], the numbers of inner and outer iterations are bounded by $\frac{\Psi_0^{\gamma}}{\lambda\gamma}$ and $\frac{1}{\theta}\log\frac{n}{\epsilon}$, respectively. For the total number of iterations, we multiply the number of inner iterations by the number of outer iterations. Hence we have the desired result. This completes the proof. □
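Spelled out in the standard generic form (a sketch; the symbols follow the framework of Section 4): if each inner iteration satisfies the decrease estimate $\Psi_{k+1} \le \Psi_k - \lambda \Psi_k^{1-\gamma}$ and each outer iteration replaces μ by $(1-\theta)\mu$, then

$$N_{\text{inner}} \le \frac{\Psi_0^{\gamma}}{\lambda\gamma}, \qquad N_{\text{outer}} \le \frac{1}{\theta}\log\frac{n}{\epsilon}, \qquad N_{\text{total}} \le N_{\text{inner}} \cdot N_{\text{outer}} = \frac{\Psi_0^{\gamma}}{\theta\lambda\gamma}\log\frac{n}{\epsilon},$$

where $\Psi_0$ denotes the value of Ψ immediately after a μ-update.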
4 Application to new kernel functions
For the complexity analysis, we follow a framework similar to the one in [3] for LO problems.
Framework for analyzing the iteration bounds
Define the kernel function ψ(t) and input the initial values: τ ≥ 1, ϵ > 0, 0 < θ < 1, a strictly feasible initial point $(x^0, s^0)$, and $\mu^0 > 0$ such that $\Psi(v^0) \le \tau$.
Solve the equation $-\frac{1}{2}\psi'(t) = z$ to find ρ(z), the inverse function of $-\frac{1}{2}\psi'(t)$ for 0 < t ≤ 1. If the equation is hard to solve, compute a lower bound for ρ(z).
Solve the equation ψ(t)=u to find ϱ(u), the inverse function of ψ(t), t ≥ 1. If the equation is hard to solve, compute an upper bound for ϱ(u).
Compute a lower bound for δ with respect to Ψ.
Compute the upper bound for Ψ(v).
Using Step 3, Step 4 and the default step size in (22), find λ > 0 and γ, 0 < γ ≤ 1, as small as possible such that $\Psi_{k+1} \le \Psi_k - \lambda \Psi_k^{1-\gamma}$ holds for each inner iteration.
Derive an upper bound for the total number of iterations from $\frac{\Psi_0^{\gamma}}{\theta\lambda\gamma}\log\frac{n}{\epsilon}$.
Let $\tau = O(n)$ and θ = Θ(1) to compute an iteration bound for the large-update method, and let $\tau = O(1)$ and $\theta = \Theta(1/\sqrt{n})$ to get an iteration bound for the small-update method.
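When Steps 2a-2b admit no closed form, the inverse functions can be evaluated or bounded numerically. A sketch using plain bisection, with the classical logarithmic kernel as a stand-in for ψ (the paper's kernels are not reproduced here), and taking ρ as the inverse of $-\frac{1}{2}\psi'(t)$ on 0 < t ≤ 1, a common convention that should be adjusted to the definition actually used in Step 2a:

```python
import math

def psi(t):  return (t * t - 1) / 2 - math.log(t)   # classical log kernel (stand-in)
def dpsi(t): return t - 1.0 / t

def invert(f, target, lo, hi):
    """Bisection for a function f monotone on [lo, hi] that brackets `target`."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if (f(mid) - target) * (f(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# varrho(u): inverse of psi(t) on t >= 1 (psi is increasing there)
def varrho(u): return invert(psi, u, 1.0, 1e6)

# rho(z): inverse of -psi'(t)/2 on 0 < t <= 1 (decreasing in t)
def rho(z): return invert(lambda t: -dpsi(t) / 2, z, 1e-12, 1.0)

# e.g. rho(1.0) solves 1/t - t = 2, i.e. t = sqrt(2) - 1
```

For the complexity analysis one usually wants explicit lower/upper bounds rather than numerical values, but a numeric inverse like this is useful for checking candidate bounds.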
Step 1: Using Lemma 3.3, , .
For the large-update method, .
For the small-update method, .
where the last inequality follows from and the assumption .
- (i) For large-update methods,
- (ii) For small-update methods,
Step 7: Using Step 6 and Remark 3.9, for the large-update method with $\tau = O(n)$ and $\theta = \Theta(1)$, the algorithm has $O((1+2\kappa)\sqrt{n}\log n\log\frac{n}{\epsilon})$ complexity. For the small-update method with $\tau = O(1)$ and $\theta = \Theta(1/\sqrt{n})$, the algorithm has $O((1+2\kappa)\sqrt{n}\log\frac{n}{\epsilon})$ complexity. These are currently the best known complexity results.
Remark 4.1 For the kernel function in Table 1, by applying the framework, the algorithms have and iteration bounds for large- and small-update methods, respectively, where and . By taking and , the algorithm has complexity for large-update methods. Choosing and , the algorithm has for small-update methods. In conclusion, we obtain the currently best known iteration bounds of large- and small-update methods for the kernel functions in Table 1.
This research of the first author was supported by the Basic Science Research Program through NRF funded by the Ministry of Education, Science, and Technology (No. 2012005767) and by the Research Fund Program of Research Institute for Basic Science, Pusan National University, Korea, 2012, Project No. RIBS-PNU-2012-102.
1. Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, San Diego; 1992.
2. Wang GQ, Bai YQ: Polynomial interior-point algorithm for horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248–263. doi:10.1016/j.cam.2009.07.014
3. Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101–128. doi:10.1137/S1052623403423114
4. Cho GM: A new large-update interior point algorithm for linear complementarity problems. J. Comput. Appl. Math. 2008, 216: 256–278.
5. Cho GM, Kim MK: A new large-update interior point algorithm for LCPs based on kernel functions. Appl. Math. Comput. 2006, 182: 1169–1183. doi:10.1016/j.amc.2006.04.060
6. Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007, 49: 259–270. doi:10.1017/S1446181100012827
7. Amini K, Peyghami MR: Exploring complexity of large update interior-point methods for linear complementarity problem based on kernel function. Appl. Math. Comput. 2009, 207: 501–513. doi:10.1016/j.amc.2008.11.002
8. Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766–782.
9. Ghami ME, Steihaug T: Kernel-function based primal-dual algorithms for linear complementarity problems. RAIRO Rech. Opér. 2010, 44: 185–205.
10. Lesaja G, Roos C: Unified analysis of kernel-based interior-point methods for $P_*(\kappa)$-linear complementarity problems. SIAM J. Optim. 2010, 20: 3014–3039. doi:10.1137/090766735
11. Ghami ME, Guennoun ZA, Bouali S, Steihaug T: Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term. J. Comput. Appl. Math. 2012, 236: 3613–3623. doi:10.1016/j.cam.2011.05.036
12. Cho GM, Cho YY, Lee YH: A primal-dual interior-point algorithm based on a new kernel function. ANZIAM J. 2010, 51: 476–491. doi:10.1017/S1446181110000908
13. Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, Berlin; 1991.
14. Tütüncü RH, Todd MJ: Reducing horizontal linear complementarity problems. Linear Algebra Appl. 1995, 223/224: 717–729.
15. Jansen B, Roos K, Terlaky T, Yoshise A: Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems. Math. Program. 1997, 78: 315–345.
16. Xiu N, Zhang J: A smoothing Gauss-Newton method for the generalized HLCP. J. Comput. Appl. Math. 2001, 129: 195–208. doi:10.1016/S0377-0427(00)00550-1
17. Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.
18. Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization, an Interior Approach. Wiley, Chichester; 1997.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.