 Research
 Open Access
On the preconditioned GAOR method for a linear complementarity problem with an M-matrix
 Shu-Xin Miao^{1} and
 Dan Zhang^{1}
https://doi.org/10.1186/s13660-018-1789-5
© The Author(s) 2018
 Received: 18 March 2018
 Accepted: 20 July 2018
 Published: 27 July 2018
Abstract
Recently, based on the Hadjidimos preconditioner, a preconditioned GAOR method was proposed for solving the linear complementarity problem (Liu and Li in East Asian J. Appl. Math. 2:94–107, 2012). In this paper, we propose a new preconditioned GAOR method for solving the linear complementarity problem with an M-matrix. The convergence of the proposed method is analyzed, and comparison results are obtained which show that it accelerates the convergence of both the original GAOR method and the preconditioned GAOR method of (Liu and Li in East Asian J. Appl. Math. 2:94–107, 2012). Numerical examples verify the theoretical analysis.
Keywords
 Linear complementarity problem
 Preconditioner
 Preconditioned GAOR method
 M-matrix
MSC
 65F10
 65F15
1 Introduction
The linear complementarity problem (LCP) is to find a vector \(x\in \mathbb{R}^{n}\) such that
\(x\geq 0,\qquad Ax-f\geq 0,\qquad x^{T}(Ax-f)=0, \quad (1)\)
where \(A\in \mathbb{R}^{n\times n}\) and \(f\in \mathbb{R}^{n}\) are given. The LCP of the form (1) arises in many scientific computing and engineering applications, for example, the Nash equilibrium point of a bimatrix game, the contact problem, and the free boundary problem for journal bearings; see [5, 16] and the references therein. As is known, LCP (1) possesses a unique solution if and only if \(A\in \mathbb{R}^{n\times n}\) is a P-matrix, namely a matrix all of whose principal minors are positive; see [4, 5, 16]. A matrix A is called an M-matrix if its inverse is nonnegative and all its off-diagonal entries are nonpositive. An M-matrix has a positive diagonal and is a P-matrix, so LCP (1) with an M-matrix has a unique solution [3].
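As a numerical aside (not part of the paper), the M-matrix definition above can be checked directly for a concrete matrix; the helper name `is_m_matrix` below is ours, introduced only for illustration:

```python
import numpy as np

def is_m_matrix(A, tol=1e-12):
    """Check the definition used above: all off-diagonal entries
    nonpositive and the inverse nonnegative."""
    n = A.shape[0]
    off_diag_ok = all(A[i, j] <= tol for i in range(n)
                      for j in range(n) if i != j)
    inv_ok = np.all(np.linalg.inv(A) >= -tol)
    return off_diag_ok and inv_ok

# A strictly diagonally dominant matrix with nonpositive off-diagonal
# entries is a nonsingular M-matrix.
A = np.array([[1.0, -0.3, -0.2],
              [-0.4, 1.0, -0.1],
              [-0.2, -0.3, 1.0]])
print(is_m_matrix(A))  # True for this example
```

A matrix with a positive off-diagonal entry fails the first check immediately, so the test is cheap to falsify.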
Because of these wide applications, research on numerical methods for solving (1) has attracted much attention. Several iterative methods exist for obtaining the solution of the LCP, including the projected methods [8, 9, 12], the modulus algorithms [10], and the modulus-based matrix splitting iterative methods [2, 6, 18, 19]; see [9] for a survey of iterative methods for LCP (1). We consider the generalized AOR (GAOR) method [8, 12], a special case of the projected methods, for solving LCP (1) with an M-matrix. To accelerate the convergence rate of the GAOR method, a preconditioned GAOR method, based on the preconditioner in [7], was proposed in [13] for LCP (1) with an M-matrix. In this paper, a new preconditioner is proposed to accelerate the convergence rate of the GAOR method for solving LCP (1).
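To convey the projection idea behind the projected methods mentioned above, here is a minimal sketch of a plain projected Jacobi iteration for LCP (1); the GAOR method of [8, 12] is a relaxed refinement of this scheme, and the function name `projected_jacobi` is ours, not from the cited works:

```python
import numpy as np

def projected_jacobi(A, f, tol=1e-10, max_iter=500):
    """Basic projected splitting iteration for the LCP
    x >= 0, Ax - f >= 0, x^T (Ax - f) = 0.
    Shown only to illustrate the projection step; the GAOR method
    adds relaxation parameters not reproduced here."""
    d = np.diag(A)
    x = np.zeros_like(f, dtype=float)
    for _ in range(max_iter):
        # One sweep: relax against the diagonal, then project onto x >= 0.
        x_new = np.maximum(0.0, x - (A @ x - f) / d)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[1.0, -0.2], [-0.3, 1.0]])
f = np.array([0.5, -0.1])
x = projected_jacobi(A, f)
r = A @ x - f
# The complementarity conditions hold at the computed point.
print(np.all(x >= -1e-8), np.all(r >= -1e-8), abs(x @ r) < 1e-8)
```

For this small diagonally dominant M-matrix the iteration contracts quickly; convergence theory for the full GAOR scheme is the subject of Sect. 3.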
The outline of the rest of the paper is as follows. In Sect. 2, some preliminaries about the projected method are reviewed, and the new preconditioner for the preconditioned GAOR method is introduced. Convergence analysis is given in Sect. 3. In Sect. 4, the convergence rate of the proposed preconditioned GAOR method is compared with those of the preconditioned GAOR method in [13] and of the original GAOR method for LCP with an M-matrix, which shows that the proposed method converges faster than both. Numerical examples are given in Sect. 5 to verify the theoretical results. Finally, conclusions are drawn in Sect. 6.
2 Preliminaries
We give some of the notations, definitions, and lemmas which will be used in the sequel. For \(A=(a_{i,j})\), \(B=(b_{i,j})\in \mathbb{R}^{n\times n}\), we write \(A\geq B\) if \(a_{i,j}\geq b_{i,j}\) holds for all \(i,j=1,2,\ldots ,n\). A is called nonnegative, written \(A\geq O\), if \(a_{i,j}\geq 0\) for all \(i,j=1,2,\ldots ,n\), where O is the \(n\times n\) zero matrix. For vectors \(a, b\in \mathbb{R}^{n}\), \(a\geq b\) and \(a\geq 0\) are defined in a similar manner. By \(|A|=(|a_{ij}|)\) we denote the absolute value of a given matrix \(A\in \mathbb{R}^{n\times n}\). We denote by \(\operatorname{diag}(C)\) the \(n\times n\) diagonal matrix coinciding in its diagonal with a matrix \(C\in \mathbb{R}^{n\times n}\). For simplicity, we assume throughout that \(a_{ii}=1\) (\(i=1,2,\ldots ,n\)).
Lemma 1
([17])
Let \(A = (a_{ij}) \in \mathbb{R}^{n \times n}\) with \(a_{ij} \leq 0\) for \(i\neq j\). Then A is an M-matrix if and only if there exists a positive vector y such that \(Ay > 0\).
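Lemma 1 gives a practical certificate: for a matrix with nonpositive off-diagonal entries, exhibiting one vector \(y>0\) with \(Ay>0\) proves the M-matrix property. For a nonsingular M-matrix a convenient choice is \(y=A^{-1}e\), with \(e\) the all-ones vector, since then \(Ay=e>0\). A small numerical sketch (illustration only):

```python
import numpy as np

# A nonsingular M-matrix with nonpositive off-diagonal entries.
A = np.array([[1.0, -0.5, 0.0],
              [-0.25, 1.0, -0.5],
              [0.0, -0.5, 1.0]])
e = np.ones(3)
y = np.linalg.solve(A, e)      # candidate certificate y = A^{-1} e
# y is positive and A y = e > 0, so Lemma 1 certifies the M-matrix property.
print(np.all(y > 0), np.all(A @ y > 0))  # both True
```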
There are many equivalent characterizations of an M-matrix; for instance, a nonsingular M-matrix is a monotone matrix; see [4].
Definition 1
([4])
For a matrix \(A\in \mathbb{R}^{n\times n}\), the representation \(A=M-N\) is called a splitting of A if M is nonsingular. A splitting \(A=M-N\) is called weak regular if \(M^{-1}\geq 0\) and \(M^{-1}N\geq 0\).
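Definition 1 is easy to spot-check numerically. For instance, the Jacobi splitting \(M=\operatorname{diag}(A)\), \(N=M-A\) of an M-matrix is weak regular; a sketch under that assumption:

```python
import numpy as np

# Jacobi splitting A = M - N of a small M-matrix.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.diag(np.diag(A))        # M = diag(A), nonsingular
N = M - A                      # N collects the off-diagonal part
M_inv = np.linalg.inv(M)
# Weak regularity: M^{-1} >= 0 and M^{-1} N >= 0.
print(np.all(M_inv >= 0), np.all(M_inv @ N >= 0))  # both True here
```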
For the weak regular splittings of different monotone matrices, there is a comparison result as follows.
Lemma 2
([4])
The following lemma is taken from [11].
Lemma 3
([11])
Let A be an M-matrix, and let x be the solution of LCP (1). If \(f_{i}>0\), then \(x_{i}>0\) and therefore \(\sum^{n}_{j=1} a_{ij}x_{j}-f_{i}=0\). Moreover, if \(f\leq 0\), then \(x=0\) is the solution of LCP (1).
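A quick numerical illustration of Lemma 3 (not part of the paper's argument): when every component of f is positive, each \(x_{i}>0\) and every residual component vanishes, so the LCP solution is simply \(x=A^{-1}f\):

```python
import numpy as np

A = np.array([[1.0, -0.4], [-0.3, 1.0]])   # an M-matrix with unit diagonal
f = np.array([0.2, 0.5])                   # f > 0 componentwise
x = np.linalg.solve(A, f)                  # by Lemma 3, this solves LCP (1)
print(np.all(x > 0))                       # x_i > 0, as the lemma predicts
print(np.allclose(A @ x, f))               # sum_j a_ij x_j - f_i = 0 for all i
```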
For the study of the projected methods, the following definition is needed.
Definition 2
([15])
For any \(x, y\in \mathbb{R}^{n}\), the projection \(x_{+}=\max \{x,0\}\) (taken componentwise) satisfies:
 (i)
\((x+y)_{+}\leq x_{+}+y_{+}\);
 (ii)
\(x_{+}-y_{+}\leq (x-y)_{+}\);
 (iii)
\(x=x_{+}-(-x)_{+}\);
 (iv)
\(x\leq y\Rightarrow x_{+}\leq y_{+}\).
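The four properties above concern the componentwise projection \(x_{+}=\max\{x,0\}\) and can be spot-checked on sample vectors (a numerical aside, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
plus = lambda v: np.maximum(v, 0.0)        # componentwise projection v_+

assert np.all(plus(x + y) <= plus(x) + plus(y))   # (i)  subadditivity
assert np.all(plus(x) - plus(y) <= plus(x - y))   # (ii)
assert np.all(np.isclose(x, plus(x) - plus(-x)))  # (iii) decomposition
z = np.minimum(x, y)                              # z <= y componentwise
assert np.all(plus(z) <= plus(y))                 # (iv)  monotonicity
print("all four properties hold on this sample")
```

These are identities, so the assertions hold for any sample, not just this seed.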
Lemma 4
([12])
3 Convergence analysis
 (H0):

\(f_{1}>0\) and \(f_{n}>0\);
 (H1):

\(0\leq \gamma_{i}\leq 1\) for \(i=1,2,\ldots , n\);
 (H2):

\(-\gamma_{1} a_{1n}+a_{1n}\leq \beta_{1}\leq -\gamma_{1} a_{1n}\);
 (H3):

\(-\gamma_{i} a_{i1}+a_{i1}\leq \beta_{i}\leq -\gamma_{i} a_{i1} \) for \(i=2,3,\ldots ,n\).
Theorem 1
Proof
Lemma 5
If A is an M-matrix and (H1)–(H3) hold, then \(\widehat{A}= \widehat{P}A\) is an M-matrix.
Proof
If A is an M-matrix, then \(a_{ij}<0\) for \(i\neq j\) and \(a_{1i}a_{i1}<1\), which leads to \(a_{1i}>1/a_{i1}\). On the other hand, from (8) and assumption (H2), \(-\gamma_{1} a_{1n}+a_{1n}\leq \beta_{1}\leq -\gamma_{1} a_{1n}\), we have that \(\beta_{1} + \gamma_{1} a_{1n}\leq 0\) and \(\beta_{1} +\gamma_{1} a_{1n}\geq a_{1n}> 1/a_{n1}\). Then \(\hat{a}_{11}= a_{11}-(\beta_{1} +\gamma_{1} a_{1n})a_{n1} > 1-(1/a_{n1})a_{n1}=0 \).
From Lemma 1 there exists a positive vector \(y > 0\) such that \(Ay > 0\). Note that \(\widehat{P}>0\), thus \(\widehat{A} y = \widehat{P}Ay > 0\), and from Lemma 1, Â is an Mmatrix. □
Theorem 2
Let A be a diagonally dominant M-matrix. If (H0)–(H3) hold, then for \(0\leq \omega_{i}\leq 2/[1+\rho (\hat{J})]\) (\(i=1,\ldots ,n\)) and \(0\leq \alpha \leq 1\), the iterative sequence of the preconditioned GAOR method (6) converges to the unique solution \(x^{\ast }\) of LCP (1), where \(\hat{J}=\widehat{D}^{-1}(\widehat{L}+\widehat{U})\).
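The bound \(2/[1+\rho (\hat{J})]\) in Theorem 2 is straightforward to evaluate for a concrete matrix. The sketch below computes \(\rho (J)\) for the Jacobi matrix of a sample diagonally dominant M-matrix and the resulting parameter bound; the preconditioned matrix \(\widehat{A}\) itself is not reproduced in this illustration:

```python
import numpy as np

# A sample diagonally dominant M-matrix with unit diagonal.
A = np.array([[1.0, -0.3, -0.2],
              [-0.4, 1.0, -0.1],
              [-0.2, -0.3, 1.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)            # strictly lower part, convention A = D - L - U
U = -np.triu(A, 1)             # strictly upper part
J = np.linalg.inv(D) @ (L + U) # Jacobi iteration matrix D^{-1}(L + U)
rho = max(abs(np.linalg.eigvals(J)))
omega_max = 2.0 / (1.0 + rho)  # upper bound on the relaxation parameters
print(0 < rho < 1, omega_max > 1)  # convergent Jacobi matrix; bound exceeds 1
```

For diagonally dominant matrices \(\rho (J)<1\), so the admissible interval for the \(\omega_{i}\) always extends beyond 1.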
4 Comparison results
Lemma 6
If A is an M-matrix with diagonal elements 1 and (H1)–(H3) hold, then \(0\leq \widehat{D}\leq \widetilde{D}\).
Proof
Theorem 3
Proof
The proof is completed. □
Remark 1
5 Numerical example
In this section, two examples are given to verify the theoretical results.
Example 1
\(\rho (G^{-1}F)\), \(\rho (\widetilde{G}^{-1}\widetilde{F})\), and \(\rho (\widehat{G}^{-1}\widehat{F})\) with \(\alpha =0.1\) and \(\omega_{i}=0.1\)
Preconditioner  \((\gamma _{1},\ldots ,\gamma _{5})^{T}\)  \((\beta _{1},\ldots ,\beta _{5})^{T}\)  ρ(⋅)
I  –  –  0.96934
P̃  \((0,1,1,1,0.1)^{T}\)  \((0,0.1,0,0.01,0.05)^{T}\)  0.96311
P̃  \((0,1,0,1,0)^{T}\)  \((0,0,0.04,0.04,0.05)^{T}\)  0.96750
P̂  \((1,1,1,1,0.1)^{T}\)  \((0.03,0.1,0,0.01,0.05)^{T}\)  0.96292
P̂  \((1,1,0,1,0)^{T}\)  \((0,0,0.04,0.04,0.05)^{T}\)  0.96708
\(\rho (G^{-1}F)\), \(\rho (\widetilde{G}^{-1}\widetilde{F})\), and \(\rho (\widehat{G}^{-1}\widehat{F})\) with \(\alpha =0.1\) and \(\omega_{i}=0.9\)
Preconditioner  \((\gamma _{1},\ldots ,\gamma _{5})^{T}\)  \((\beta _{1},\ldots ,\beta _{5})^{T}\)  ρ(⋅)
I  –  –  0.71690
P̃  \((0,1,1,1,0.1)^{T}\)  \((0,0.1,0,0.01,0.05)^{T}\)  0.67388
P̃  \((0,1,0,1,0)^{T}\)  \((0,0,0.04,0.04,0.05)^{T}\)  0.70036
P̂  \((1,1,1,1,0.1)^{T}\)  \((0.03,0.1,0,0.01,0.05)^{T}\)  0.67153
P̂  \((1,1,0,1,0)^{T}\)  \((0,0,0.04,0.04,0.05)^{T}\)  0.69654
Example 2
\(\rho (G^{-1}F)\), \(\rho (\widetilde{G}^{-1}\widetilde{F})\), and \(\rho (\hat{G}^{-1}\hat{F})\) with \(\alpha =0.1\) and \(\omega_{i}=0.6\) for Example 2
n  I  P̃  P̂
5  0.46078  0.46040  0.46038
10  0.55689  0.55636  0.55635
15  0.65889  0.65833  0.65832
20  0.763952  0.763399  0.763395
\(\rho (G^{-1}F)\), \(\rho (\widetilde{G}^{-1}\widetilde{F})\), and \(\rho (\hat{G}^{-1}\hat{F})\) with \(\alpha =0.2\) and \(\omega_{i}=0.7\) for Example 2
n  I  P̃  P̂
5  0.37475  0.37400  0.37398
10  0.49391  0.49285  0.49284
15  0.62177  0.62063  0.62062
20  0.75501  0.75386  0.75385
6 Conclusions
In this paper, we present a new preconditioner P̂, which provides a preconditioning effect on all the rows of A, to accelerate the convergence rate of the GAOR method for solving LCP (1) with an M-matrix A, and consider the preconditioned GAOR method (6). We prove that the original LCP (1) is equivalent to LCP (7) and show that the preconditioned GAOR method (6) is convergent for solving LCP (1). A comparison theorem for the preconditioned GAOR method (6) is then obtained, which shows that it improves the convergence rate of the preconditioned GAOR method in [13] for solving LCP (1). Together with the comparison result in [12], we conclude that the preconditioned GAOR method (6) considerably improves the convergence rate of the original GAOR method for solving LCP (1).
Declarations
Acknowledgements
The authors would like to thank the editors and reviewers for their valuable comments, which greatly improved the readability of this paper.
Funding
This work was supported by the China Postdoctoral Science Foundation (No. 2017M613244).
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 [1] Ahn, B.H.: Solution of nonsymmetric linear complementarity problems by iterative methods. J. Optim. Theory Appl. 33, 185–197 (1981)
 [2] Bai, Z.-Z.: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917–933 (2010)
 [3] Bai, Z.-Z., Evans, D.J.: Matrix multisplitting relaxation methods for linear complementarity problems. Int. J. Comput. Math. 63, 309–326 (1997)
 [4] Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
 [5] Cottle, R., Pang, J.-S., Stone, R.M.: The Linear Complementarity Problem. Academic Press, San Diego (1992)
 [6] Dong, J.-L., Jiang, M.-Q.: A modified modulus method for symmetric positive-definite linear complementarity problems. Numer. Linear Algebra Appl. 16, 129–143 (2009)
 [7] Hadjidimos, A., Noutsos, D., Tzoumas, M.: More on modifications and improvements of classical iterative schemes for M-matrices. Linear Algebra Appl. 364, 253–279 (2003)
 [8] Hadjidimos, A., Tzoumas, M.: On the solution of the linear complementarity problem by the generalized accelerated overrelaxation iterative method. J. Optim. Theory Appl. 165, 545–562 (2015)
 [9] Hadjidimos, A., Tzoumas, M.: The solution of the linear complementarity problem by the matrix analogue of the accelerated overrelaxation iterative method. Numer. Algorithms 73, 665–684 (2016)
 [10] Kappel, N.W., Watson, L.T.: Iterative algorithms for the linear complementarity problems. Int. J. Comput. Math. 19, 273–297 (1986)
 [11] Li, D.-H., Zeng, J.-P., Zhang, Z.: Gaussian pivoting method for solving linear complementarity problem. Appl. Math. J. Chin. Univ. Ser. B 12, 419–426 (1997)
 [12] Li, Y., Dai, P.: Generalized AOR for linear complementarity problem. Appl. Math. Comput. 188, 7–18 (2007)
 [13] Liu, C., Li, C.: A new preconditioned generalised AOR method for the linear complementarity problem based on a generalised Hadjidimos preconditioner. East Asian J. Appl. Math. 2, 94–107 (2012)
 [14] Liu, Y., Zhang, R., Wang, Y., Huang, X.: Comparison analysis on preconditioned GAOR method for linear complementarity problem. J. Inf. Comput. Sci. 9, 4493–4500 (2012)
 [15] Mangasarian, O.L.: Solution of symmetric linear complementarity problems by iterative methods. J. Optim. Theory Appl. 22, 465–485 (1977)
 [16] Murty, K.: Linear Complementarity, Linear and Nonlinear Programming. Heldermann, Berlin (1988)
 [17] Yip, E.L.: A necessary and sufficient condition for M-matrices and its relation to block LU factorization. Linear Algebra Appl. 235, 261–274 (1995)
 [18] Zhang, L.-L., Ren, Z.-R.: Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems. Appl. Math. Lett. 26, 638–642 (2013)
 [19] Zheng, N., Yin, J.-F.: Accelerated modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Algorithms 64, 245–262 (2013)