Kernel function based interior-point methods for horizontal linear complementarity problems
Journal of Inequalities and Applications volume 2013, Article number: 215 (2013)
Abstract
It is well known that each kernel function defines an interior-point algorithm. In this paper we propose new classes of kernel functions whose form differs from that of the known kernel functions and whose barrier terms are exponential powers of exponential functions, and we define interior-point methods (IPMs) based on these functions for $P_*(\kappa)$-horizontal linear complementarity problems (HLCPs). New search directions and proximity measures are defined by these kernel functions. We obtain the currently best known complexity results for both large- and small-update methods.
1 Introduction
In this paper we consider the $P_*(\kappa)$-horizontal linear complementarity problem (HLCP), defined as follows.
Given , a -pair, , , and , find a pair such that
Note that is called a -pair if implies that
where , , and .
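Since this formulation is standard across the HLCP literature (e.g. [1, 2, 14]), a typical statement of problem (1) and of the $P_*(\kappa)$-pair condition is recalled below; the right-hand-side symbol $q$ and the index sets $I_+$, $I_-$ are notation assumed here.

```latex
% A typical statement of the P_*(kappa)-HLCP (1); the symbol q is assumed notation
\begin{aligned}
& \text{find } (x, s) \in \mathbb{R}^{2n} \text{ such that} \quad
  Mx + Ns = q, \qquad xs = 0, \qquad x \ge 0,\ s \ge 0,\\
& \text{where } (M, N) \text{ is a } P_*(\kappa)\text{-pair, i.e., } Mx + Ns = 0 \text{ implies}\\
& (1 + 4\kappa) \sum_{i \in I_+} x_i s_i + \sum_{i \in I_-} x_i s_i \ \ge\ 0,
  \qquad I_+ = \{\, i : x_i s_i > 0 \,\}, \qquad I_- = \{\, i : x_i s_i < 0 \,\}.
\end{aligned}
```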
$P_*(\kappa)$-HLCPs have many applications in economic equilibrium problems, noncooperative games, traffic assignment problems, and optimization problems [1, 2]. The $P_*(\kappa)$-HLCP (1) includes the standard linear complementarity problem (LCP) as well as linear and quadratic optimization problems. Indeed, when N is nonsingular, the $P_*(\kappa)$-HLCP reduces to a $P_*(\kappa)$-LCP. Furthermore, when , the $P_*(\kappa)$-HLCP is a monotone LCP.
Recently, Bai et al. [3] introduced the concept of eligible kernel functions, which must satisfy four conditions, and proposed primal-dual IPMs for linear optimization (LO) problems based on these functions; some of these methods achieve the best known complexity results for both large- and small-update methods. Cho [4] and Cho et al. [5] extended these algorithms from LO to $P_*(\kappa)$-linear complementarity problems (LCPs) and obtained complexity results for large-update methods similar to those for LO problems. Amini et al. [6, 7] introduced new IPMs based on parametric versions of the kernel functions in [3] and obtained better iteration bounds than the algorithm in [3], supported by numerical tests. Wang et al. [2] generalized polynomial IPMs for LO problems to $P_*(\kappa)$-HLCPs based on a finite kernel function, first defined in [8], and obtained the same iteration bounds for large- and small-update methods as for LO problems. Ghami et al. [9] extended IPMs for LO problems to $P_*(\kappa)$-LCPs based on the eligible kernel functions defined in [3] and proposed both large- and small-update methods. Lesaja et al. [10] also proposed IPMs for $P_*(\kappa)$-LCPs based on ten kernel functions originally defined for LO problems. Ghami et al. [11] proposed an IPM for LO problems based on a kernel function whose barrier term is a trigonometric function; however, this method does not attain the best known iteration bound for large-update methods. Cho et al. [12] defined a new kernel function, whose barrier term is an exponential power of the exponential function, for LO problems and obtained the best known iteration bounds for large- and small-update methods.
Motivated by these works, we introduce new classes of eligible kernel functions, which differ from the known kernel functions in [3, 6, 7] and have an exponential power of exponential barrier term, and we give a complexity analysis of IPMs for $P_*(\kappa)$-HLCPs based on these kernel functions. We show that the resulting algorithms achieve the currently best known iteration bounds for large- and small-update methods.
The paper is organized as follows. In Section 2 we recall some basic concepts and present a generic interior-point algorithm for the $P_*(\kappa)$-HLCP. In Section 3 we introduce new classes of eligible kernel functions and their technical properties. Finally, in Section 4 we derive the framework for analyzing the iteration bounds and obtain the complexity results of the algorithms based on these kernel functions.
Notational conventions: $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the sets of n-dimensional nonnegative vectors and positive vectors, respectively. For $x, s \in \mathbb{R}^n$, $\min(x)$, $xs$, and $\frac{x}{s}$ denote the smallest component of the vector x, the componentwise product of the vectors x and s, and the column vector with components $\frac{x_i}{s_i}$, respectively. We denote by D the diagonal matrix obtained from a vector d, i.e., $D = \operatorname{diag}(d)$. e denotes the n-dimensional vector of ones. For $f, g : \mathbb{R}_{++} \to \mathbb{R}_{++}$, $f(t) = O(g(t))$ if $f(t) \le c\, g(t)$ for some positive constant c, and $f(t) = \Theta(g(t))$ if $c_1 g(t) \le f(t) \le c_2 g(t)$ for some positive constants $c_1$ and $c_2$.
2 Preliminaries
In this section we recall some basic definitions and introduce a generic interior-point algorithm for the $P_*(\kappa)$-HLCP.
Definition 2.1 [13]
Let $M \in \mathbb{R}^{n \times n}$, $x \in \mathbb{R}^n$, and $\kappa \ge 0$.
(i) M is called a positive semidefinite matrix if $x^T M x \ge 0$ for all $x \in \mathbb{R}^n$.
(ii) M is called a $P_0$-matrix if, for every nonzero $x \in \mathbb{R}^n$, there exists an index i such that $x_i \ne 0$ and $x_i (Mx)_i \ge 0$.
(iii) M is called a $P_*(\kappa)$-matrix if
$(1 + 4\kappa) \sum_{i \in I_+(x)} x_i (Mx)_i + \sum_{i \in I_-(x)} x_i (Mx)_i \ge 0$ for all $x \in \mathbb{R}^n$,
where $(Mx)_i$ denotes the i th component of the vector Mx, $I_+(x) = \{ i : x_i (Mx)_i > 0 \}$, and $I_-(x) = \{ i : x_i (Mx)_i < 0 \}$.
Definition 2.2 [14]
Let $M, N \in \mathbb{R}^{n \times n}$, $x, s \in \mathbb{R}^n$, and $\kappa \ge 0$.
(i) $(M, N)$ is called a monotone pair if $Mx + Ns = 0$ implies $x^T s \ge 0$.
(ii) $(M, N)$ is called a $P_0$-pair if $Mx + Ns = 0$ and $(x, s) \ne 0$ imply that there exists an index i such that $x_i \ne 0$ or $s_i \ne 0$, and $x_i s_i \ge 0$.
(iii) $(M, N)$ is called a $P_*(\kappa)$-pair if $Mx + Ns = 0$ implies that $(1 + 4\kappa) \sum_{i \in I_+} x_i s_i + \sum_{i \in I_-} x_i s_i \ge 0$, where $I_+ = \{ i : x_i s_i > 0 \}$ and $I_- = \{ i : x_i s_i < 0 \}$.
Lemma 2.3 If is a -pair, then
is a nonsingular matrix for any positive diagonal matrices .
Proof Assume that the matrix is singular. Then for some nonzero , i.e., and , . Hence , and we have an index such that or , and , since is a -pair. On the other hand, . This is a contradiction. This completes the proof. □
Since the class of $P_0$-pairs includes the class of $P_*(\kappa)$-pairs, we obtain the following corollary.
Corollary 2.4 Let $(M, N)$ be a $P_*(\kappa)$-pair and . Then the system
has a unique solution .
The basic idea of generic IPMs is to replace the second equation of (1) by the parameterized equation $xs = \mu e$ with $\mu > 0$, i.e., we consider the following system:
Without loss of generality, we assume that (1) satisfies the interior-point condition (IPC), i.e., there exists a strictly feasible pair for (1) [15]. Since $(M, N)$ is a $P_*(\kappa)$-pair and (1) satisfies the IPC, the system (2) has a unique solution for each $\mu > 0$, which is called the μ-center. The set of μ-centers is called the central path of (1). The limit of the central path exists and, since the limit point satisfies (1), it naturally yields a solution of (1) [16]. IPMs follow this central path approximately and approach a solution of (1) as $\mu \to 0$.
For a given strictly feasible iterate, applying Newton's method to the system (2) yields the following Newton system:
Taking a step of size α along the search direction, we define a new iterate as follows:
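In the notation used above, the Newton system (3) and the damped update (4) commonly take the following form (cf. [2, 3]); this is a reconstruction of the standard displays rather than a new derivation.

```latex
% Standard form of the Newton system (3) and of the update (4)
\begin{aligned}
& M\,\Delta x + N\,\Delta s = 0, \qquad s\,\Delta x + x\,\Delta s = \mu e - xs,\\
& x_{+} := x + \alpha\,\Delta x, \qquad s_{+} := s + \alpha\,\Delta s, \qquad 0 < \alpha \le 1 .
\end{aligned}
```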
To motivate the new algorithm, we define the following scaled vectors:
Using (5), we can rewrite the Newton-system (3) as follows:
where , , and . Note that the right-hand side of the second equation of (6) equals the negative gradient of the logarithmic barrier function , i.e.,
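The scaling (5) and the resulting scaled system (6)-(7) are standard; with $d := \sqrt{x/s}$ and $D := \operatorname{diag}(d)$ (notation assumed here), they usually read:

```latex
% Scaled vectors (5), scaled Newton system (6), and the classical log-barrier (7)
\begin{aligned}
& v := \sqrt{\tfrac{xs}{\mu}}, \qquad d_x := \tfrac{v\,\Delta x}{x}, \qquad d_s := \tfrac{v\,\Delta s}{s},
  \qquad \bar{M} := MD, \qquad \bar{N} := ND^{-1},\\
& \bar{M} d_x + \bar{N} d_s = 0, \qquad
  d_x + d_s = v^{-1} - v = -\nabla \Psi_c(v),\\
& \Psi_c(v) := \sum_{i=1}^{n} \psi_c(v_i), \qquad \psi_c(t) := \tfrac{t^2 - 1}{2} - \log t .
\end{aligned}
```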
The interior-point algorithm works as follows. Assume that we are given a strictly feasible point in a τ-neighborhood of the current μ-center. We update μ to $\mu_+ := (1 - \theta)\mu$ for some fixed $\theta \in (0, 1)$ and solve the system (3) to obtain the search direction. The positivity of the new iterate is ensured by a suitable choice of the step size α. This procedure is repeated until we find an iterate in the τ-neighborhood of the $\mu_+$-center; then we set $\mu := \mu_+$ and continue. The whole process is repeated until $n\mu < \epsilon$ (see Algorithm 1).
If $\theta = \Theta(1)$ and $\tau = O(n)$, then the algorithm is called a large-update method. When $\theta = \Theta(1/\sqrt{n})$ and $\tau = O(1)$, we call the algorithm a small-update method.
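To make the structure of Algorithm 1 concrete, the following Python sketch implements a generic kernel-based IPM for the HLCP under the assumptions above. The stand-in logarithmic kernel, the backtracking step-size rule, and all routine names are illustrative choices, not the method analyzed in this paper; in practice the kernel functions of Table 1 and the default step size derived later would be substituted.

```python
import numpy as np

# Stand-in kernel: the classical logarithmic kernel psi(t) = (t^2 - 1)/2 - ln t.
# The new kernel functions of Table 1 would be substituted here.
def psi(t):
    return 0.5 * (t ** 2 - 1.0) - np.log(t)

def psi_prime(t):
    return t - 1.0 / t

def proximity(v):
    """Kernel-based proximity Psi(v) = sum_i psi(v_i)."""
    return float(np.sum(psi(v)))

def generic_ipm(M, N, x, s, mu, theta=0.5, tau=3.0, eps=1e-8):
    """Generic kernel-based IPM for the HLCP  Mx + Ns = q, xs = 0, x, s >= 0,
    started from a strictly feasible pair (x, s) > 0."""
    n = x.size
    while n * mu >= eps:                       # outer loop: stop once n*mu is small
        mu *= 1.0 - theta                      # mu-update
        v = np.sqrt(x * s / mu)
        while proximity(v) > tau:              # inner loop: return to tau-neighborhood
            # Modified Newton system:  M dx + N ds = 0,  s*dx + x*ds = -mu * v * psi'(v)
            A = np.block([[M, N], [np.diag(s), np.diag(x)]])
            rhs = np.concatenate([np.zeros(n), -mu * v * psi_prime(v)])
            d = np.linalg.solve(A, rhs)
            dx, ds = d[:n], d[n:]
            # Damped step keeping (x, s) > 0; the paper uses a default step size
            # based on psi'' instead of this simple backtracking rule.
            alpha = 1.0
            while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
                alpha *= 0.5
            x, s = x + alpha * dx, s + alpha * ds
            v = np.sqrt(x * s / mu)
    return x, s, mu
```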
3 New kernel function
In this section we define new classes of kernel functions and give their essential properties.
A function $\psi : \mathbb{R}_{++} \to \mathbb{R}_{+}$ is called a kernel function if ψ is twice differentiable and satisfies the following conditions:
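A typical way of writing these conditions (cf. [3]) is the following; growth conditions of this kind are what give ψ its barrier character.

```latex
% Typical defining conditions of a (barrier-type) kernel function
\psi(1) = \psi'(1) = 0, \qquad \psi''(t) > 0 \ \ (t > 0), \qquad
\lim_{t \downarrow 0} \psi(t) = \lim_{t \to \infty} \psi(t) = +\infty .
```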
We define new classes of kernel functions , , in Table 1 and give the first three derivatives of , , in Table 2 and Table 3.
In the following lemma, we show that , , are eligible [3].
Lemma 3.1 Let , , be defined as in Table 1. Then , , satisfy the following eligible conditions:
(a) , , i.e., ψ is exponentially convex,
(b) , ,
(c) , ,
(d) , .
Proof From Table 4, Table 3, and Table 5, we show that , , satisfy eligible conditions (a)-(d). □
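For completeness, the eligibility conditions introduced in [3], to which conditions (a)-(d) above typically correspond, are usually written as follows; the ordering shown here is the one commonly used in the literature and is assumed.

```latex
% Eligibility conditions in the form of [3]
\begin{aligned}
&\text{(a)}\quad t\,\psi''(t) + \psi'(t) > 0,  && t > 0 \quad (\text{exponential convexity}),\\
&\text{(b)}\quad \psi'''(t) < 0,               && t > 0,\\
&\text{(c)}\quad 2\,\psi''(t)^{2} - \psi'(t)\,\psi'''(t) > 0, && 0 < t < 1,\\
&\text{(d)}\quad \psi''(t)\,\psi'(\beta t) - \beta\,\psi'(t)\,\psi''(\beta t) > 0, && t > 1,\ \beta > 1 .
\end{aligned}
```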
Remark 3.2 For , , let , .
From Table 2,
Since , , from Table 2, , , are monotonically decreasing with respect to .
Let and denote the inverse functions of the restriction of for and for , respectively, . Then
and
Lemma 3.3 Let , , be defined as in (10). Then we have, for , ,
(i) , ,
(ii) , .
Proof For (i), using (10) and Table 2, we have the equation
Since ,
By taking the natural logarithm on both sides of (12), we have . Hence we have
In the same way as in (i), we obtain the result (ii). This completes the proof. □
Lemma 3.4 Let , , be defined as in Table 1. Then we have
(i) , ,
(ii) , .
Proof For (i), using the first condition of (8) and (9), we have
which proves the first inequality. The second inequality is obtained as follows:
For (ii), proceeding in the same way as above, we obtain the result. This completes the proof. □
Lemma 3.5 Let , , be defined as in (11). Then we have
(i) , ,
(ii) , .
Proof For (i), using the first inequality in Lemma 3.4, we have . Then we have
Similarly, we obtain the result (ii). This completes the proof. □
In this paper we replace the logarithmic barrier function in (7) by a strictly convex function as follows:
where
and , , are defined in Table 1. Since is strictly convex and minimal at , we have
Using (5) and (13), we modify the Newton-system (3) as follows:
By Corollary 2.4, the system (15) has a unique solution, which is the modified Newton search direction. Consequently, we use as the proximity function to find a search direction and to measure the proximity between the current iterate and the μ-center. We also define the norm-based proximity measure , , as follows:
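The objects just introduced have the following standard form (cf. [3, 5]), which is assumed in the rest of the analysis:

```latex
% Kernel-based proximity (14), modified Newton system (15), and norm-based measure (16)
\begin{aligned}
& \Psi(v) := \sum_{i=1}^{n} \psi(v_i), \qquad \Psi(v) \ge 0, \qquad \Psi(v) = 0 \iff v = e,\\
& \bar{M} d_x + \bar{N} d_s = 0, \qquad d_x + d_s = -\nabla \Psi(v),\\
& \delta(v) := \tfrac{1}{2}\,\lVert \nabla \Psi(v) \rVert
             = \tfrac{1}{2} \sqrt{\textstyle\sum_{i=1}^{n} \psi'(v_i)^{2}} .
\end{aligned}
```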
The following lemma gives a relation between two proximity measures.
Lemma 3.6 Let and , , be defined as in (16) and (14), respectively. Then we have
(i) ,
(ii) .
Proof For (i), using (16) and the second inequality in Lemma 3.4, we have
Hence we have .
For (ii), proceeding in the same way as above, we obtain the result. This completes the proof. □
Using the eligible conditions (b) and (c) in Lemma 3.1, we obtain the following lemma.
Lemma 3.7 (Theorem 3.2 in [3])
Let , , be defined as in (11). Then we have
In the following lemma, we give upper bounds of , , after a μ-update.
Lemma 3.8 Let , , be defined as in (14), , and . If , , then we have
(i) or ,
(ii) or .
Proof For the first inequality of (i), using Remark 3.2 with and , we get
Using Lemma 3.7, (17), and Lemma 3.5(i), we have
For the second inequality of (i), using Taylor’s theorem, and , we have
for some ξ, . Since and , we have . Using Lemma 3.7, (18), and Lemma 3.5(i), we have
where the last inequality holds from , .
In the same way as in the proof of (i), we obtain the result (ii). This completes the proof. □
Define
and
We will use and for the upper bounds of from (14) for large- and small-update methods, respectively, .
Remark 3.9 For the large-update method with and , , , and for the small-update method with and , , .
For fixed μ, taking a step of size α and using (4) and (5), we obtain the new iterates
and
For fixed ,
For notational convenience, let and , .
For , we define
where is the difference of proximities between the new iterate and the current iterate for fixed μ. By condition (a) in Lemma 3.1, we have
Hence we have , where
Then, we have . Differentiating with respect to α, we have
where and denote the i th components of the vectors and , respectively. Using (13) and (16), we have
By taking the derivative of with respect to α, we have
Since , is strictly convex in α unless . Since is a -pair and from (15), for ,
where , . Since and , we have
For notational convenience, we denote and , .
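The quantities appearing in this step-size analysis are commonly defined as follows (cf. [3, 5]); the formulas below summarize the standard pattern behind (20) and the derivative computations above, with constants depending on κ suppressed.

```latex
% Standard objects in the step-size analysis
\begin{aligned}
& v_{+} := \sqrt{(v + \alpha d_x)(v + \alpha d_s)}, \qquad f(\alpha) := \Psi(v_{+}) - \Psi(v),\\
& f(\alpha) \le f_1(\alpha) := \tfrac{1}{2}\bigl[\Psi(v + \alpha d_x) + \Psi(v + \alpha d_s)\bigr] - \Psi(v)
  \quad (\text{by exponential convexity}),\\
& f_1'(0) = \tfrac{1}{2}\,\nabla \Psi(v)^{T}(d_x + d_s) = -2\,\delta(v)^{2},\\
& f_1''(\alpha) = \tfrac{1}{2} \sum_{i=1}^{n}
   \bigl[\psi''(v_i + \alpha d_{xi})\, d_{xi}^{2} + \psi''(v_i + \alpha d_{si})\, d_{si}^{2}\bigr] .
\end{aligned}
```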
In the following lemmas, we state some technical results from [5].
Lemma 3.10 (Lemma 4.4 in [5])
if α satisfies
Lemma 3.11 (Lemma 4.5 in [5])
Let , , be defined as in (10). Then, in the worst case, the largest step size α satisfying (21) is given by
Lemma 3.12 (Lemma 4.6 in [5])
Let ρ and be defined as in Lemma 3.11. Then
Define
Then we have .
Lemma 3.13 Let be defined as in (22). Then for , we have
Lemma 3.14 (Lemma 4.10 in [5])
The right-hand sides in Lemma 3.13 are monotonically decreasing with respect to δ.
Lemma 3.15 (Proposition 1.3.2 in [17])
Let be a sequence of positive numbers such that
where and . Then .
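This result is usually quoted in the following form; the names of the constants are assumed here, with β playing the role of the constant in the recursion and γ the exponent.

```latex
% Proposition 1.3.2 in [17], as commonly quoted
t_{k+1} \le t_k - \beta\, t_k^{\,1-\gamma}, \quad k = 0, 1, \ldots, K-1, \qquad
\beta > 0,\ 0 < \gamma \le 1
\quad \Longrightarrow \quad
K \le \left\lceil \frac{t_0^{\gamma}}{\beta \gamma} \right\rceil .
```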
We define the value of after the μ-update as , and the subsequent values in the same outer iteration are denoted as , . Then we have
Theorem 3.16 Let a $P_*(\kappa)$-HLCP be given. If , then the upper bound on the total number of iterations is given by
Proof From Lemma 3.15 and Lemma II.17 in [18], the numbers of inner and outer iterations are bounded by and , respectively. Multiplying the number of inner iterations by the number of outer iterations gives the total number of iterations. Hence we have the desired result. This completes the proof. □
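The bound of Theorem 3.16 thus has the usual product structure of kernel-based IPM analyses (cf. [3]): with $\Psi_0$ denoting the value of Ψ just after a μ-update and with the constants β, γ from Lemma 3.15, a generic form (assuming $\mu^0 = 1$) is

```latex
% Generic shape of the total iteration bound
\underbrace{\frac{\Psi_0^{\gamma}}{\beta \gamma}}_{\text{inner iterations per outer iteration}}
\;\cdot\;
\underbrace{\frac{1}{\theta} \log \frac{n}{\epsilon}}_{\text{outer iterations}} .
```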
4 Application to new kernel functions
For the complexity analysis, we follow a framework similar to that used in [3] for LO problems.
We apply the framework in Table 6 to the specific kernel function
Step 1: Using Lemma 3.3, , .
Step 2: By Lemma 3.5, the inverse function of for satisfies
Step 3: Using Lemma 3.6, we obtain
Step 4: Using (19) and from Table 2, we have the following:
(i) For the large-update method, .
(ii) For the small-update method, .
Step 5: Define . Using , Step 1, , and Table 2, we have
where the last inequality follows from the assumption . Using Lemma 3.13, Lemma 3.14, Lemma 3.6, and (23), we have
where the last inequality follows from and the assumption .
Step 6: Using Theorem 3.16, Step 4 with , and , and Step 5 with and , we have the upper bounds of the total number of iterations for large- and small-update methods as follows.
(i) For large-update methods,
where .
(ii) For small-update methods,
where .
Step 7: Using Step 6 and Remark 3.9, for the large-update method with and , the algorithm has complexity. For the small-update method with and , the algorithm has complexity. These are currently the best known complexity results.
Remark 4.1 For the kernel function in Table 1, applying the same framework shows that the algorithms have and iteration bounds for large- and small-update methods, respectively, where and . Taking and , the algorithm has complexity for large-update methods. Choosing and , the algorithm has complexity for small-update methods. In conclusion, we obtain the currently best known iteration bounds of large- and small-update methods for the kernel functions , , in Table 1.
References
Cottle RW, Pang JS, Stone RE: The Linear Complementarity Problem. Academic Press, San Diego; 1992.
Wang GQ, Bai YQ: Polynomial interior-point algorithm for horizontal linear complementarity problem. J. Comput. Appl. Math. 2009, 233: 248–263. 10.1016/j.cam.2009.07.014
Bai YQ, Ghami ME, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15: 101–128. 10.1137/S1052623403423114
Cho GM: A new large-update interior point algorithm for linear complementarity problems. J. Comput. Appl. Math. 2008, 216: 256–278.
Cho GM, Kim MK: A new large-update interior point algorithm for LCPs based on kernel functions. Appl. Math. Comput. 2006, 182: 1169–1183. 10.1016/j.amc.2006.04.060
Amini K, Haseli A: A new proximity function generating the best known iteration bounds for both large-update and small-update interior-point methods. ANZIAM J. 2007, 49: 259–270. 10.1017/S1446181100012827
Amini K, Peyghami MR: Exploring complexity of large update interior-point methods for linear complementarity problem based on kernel function. Appl. Math. Comput. 2009, 207: 501–513. 10.1016/j.amc.2008.11.002
Bai YQ, Ghami ME, Roos C: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 2003, 13: 766–782.
Ghami ME, Steihaug T: Kernel-function based primal-dual algorithms for linear complementarity problems. RAIRO Rech. Opér. 2010, 44: 185–205.
Lesaja G, Roos C: Unified analysis of kernel-based interior-point methods for $P_*(\kappa)$-linear complementarity problems. SIAM J. Optim. 2010, 20: 3014–3039. 10.1137/090766735
Ghami ME, Guennoun ZA, Bouali S, Steihaug T: Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term. J. Comput. Appl. Math. 2012, 236: 3613–3623. 10.1016/j.cam.2011.05.036
Cho GM, Cho YY, Lee YH: A primal-dual interior-point algorithm based on a new kernel function. ANZIAM J. 2010, 51: 476–491. 10.1017/S1446181110000908
Kojima M, Megiddo N, Noma T, Yoshise A: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538. Springer, Berlin; 1991.
Tütüncü RH, Todd MJ: Reducing horizontal linear complementarity problems. Linear Algebra Appl. 1995, 223/224: 717–729.
Jansen B, Roos K, Terlaky T, Yoshise A: Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems. Math. Program. 1997, 78: 315–345.
Xiu N, Zhang J: A smoothing Gauss-Newton method for the generalized HLCP. J. Comput. Appl. Math. 2001, 129: 195–208. 10.1016/S0377-0427(00)00550-1
Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.
Roos C, Terlaky T, Vial JP: Theory and Algorithms for Linear Optimization, an Interior Approach. Wiley, Chichester; 1997.
Acknowledgements
This research of the first author was supported by the Basic Science Research Program through NRF funded by the Ministry of Education, Science, and Technology (No. 2012005767) and by the Research Fund Program of Research Institute for Basic Science, Pusan National University, Korea, 2012, Project No. RIBS-PNU-2012-102.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors have equally contributed in designing a new algorithm and obtaining complexity results. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Lee, YH., Cho, YY. & Cho, GM. Kernel function based interior-point methods for horizontal linear complementarity problems. J Inequal Appl 2013, 215 (2013). https://doi.org/10.1186/1029-242X-2013-215