Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems
Journal of Inequalities and Applications volume 2018, Article number: 2 (2018)
Abstract
In this paper, we present a complete version of the convergence theory of the modulus-based matrix splitting iteration methods proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016) for solving a class of implicit complementarity problems. New convergence conditions are presented when the system matrix is a positive-definite matrix and an \(H_{+}\)-matrix, respectively.
1 Introduction
Consider the following implicit complementarity problem [2], abbreviated ICP, of finding a solution \(u\in\mathbb{R}^{n}\) to
$$ u-m(u)\geq0,\qquad Au+q\geq0,\qquad \bigl(u-m(u)\bigr)^{T}(Au+q)=0, $$ (1.1)
where \(A=(a_{ij})\in\mathbb{R}^{n\times n}\), \(q=(q_{1}, q_{2}, \ldots , q_{n})^{T}\in\mathbb{R}^{n}\), and \(m(\cdot)\) is a point-to-point mapping from \(\mathbb{R}^{n}\) into itself. We further assume that the mapping \(g(u)=u-m(u)\) is invertible. Here \((\cdot)^{T}\) denotes the transpose of the corresponding vector. In the fields of scientific computing and economic applications, many problems lead to the solution of the ICP (1.1); see [3, 4]. In [2] the author showed how various kinds of complementarity problems can be transformed into the ICP (1.1) and studied sufficient conditions for the existence and uniqueness of a solution to the ICP (1.1). In particular, if the point-to-point mapping m is the zero mapping, then the ICP (1.1) reduces to
$$ u\geq0,\qquad Au+q\geq0,\qquad u^{T}(Au+q)=0, $$ (1.2)
which is known as the linear complementarity problem (abbreviated LCP) [5].
In the past few decades, much attention has been paid to finding efficient iterative methods for solving the ICP (1.1). Based on a certain implicitly defined mapping F and the idea of iterative methods for solving the LCP (1.2), Pang proposed the basic iterative method
$$ u^{(k+1)}=F\bigl(u^{(k)}\bigr),\quad k=0,1,2,\ldots, $$
where \(u^{(0)}\) is a given initial vector, and established the convergence theory. For more discussions on the mapping F and its role in the study of the ICP (1.1), see [6]. By changing variables, Noor equivalently reformulated the ICP (1.1) as a fixed-point problem, which can be solved by some unified and general iteration methods [7]. Under some suitable conditions, Zhan et al. [8] proposed a Schwarz method for solving the ICP (1.1). By reformulating the ICP (1.1) into an optimization problem, Yuan and Yin [9] proposed some variants of the Newton method.
Recently, the modulus-based iteration methods [10], which were first proposed for solving the LCP (1.2), have attracted the attention of many researchers due to their promising performance and elegant mathematical properties. The basic idea of the modulus iteration method is to transform the LCP into an implicit fixed-point equation (i.e., an absolute value equation [11]). To accelerate the convergence rate of the modulus iteration method, Dong and Jiang [12] introduced a parameter and proposed a modified modulus iteration method. They showed that the modified modulus iteration method converges unconditionally when the system matrix A of the LCP is positive-definite. Bai [13] presented a class of modulus-based matrix splitting (MMS) iteration methods, which inherit the merits of the modulus iteration method. Some general cases of the MMS methods have been studied in [14–18]. Hong and Li [1] extended the MMS methods to solve the ICP (1.1). Numerical results showed that the MMS iteration methods are more efficient than the well-known Newton method and the classical projection fixed-point iteration methods [1]. In this paper, we further consider the iteration scheme of the MMS iteration method and demonstrate a complete version of its convergence theory. New convergence conditions are presented when the system matrix is a positive-definite matrix and an \(H_{+}\)-matrix, respectively.
The outline of this paper is as follows. In Section 2, we give some preliminaries. In Section 3, we introduce the MMS iteration methods for solving the ICP (1.1). We give a complete version of convergence analysis of the MMS iteration methods in Section 4. Finally, we end this paper with some conclusions in Section 5.
2 Preliminaries
In this section, we recall some useful notations, definitions, and lemmas, which will be used in analyzing the convergence of the MMS iteration method for solving the ICP (1.1).
Let \(A=(a_{ij}), B=(b_{ij})\in\mathbb{R}^{m\times n}\) be two matrices. If their elements satisfy \(a_{ij}\geq b_{ij}\) \((a_{ij}>b_{ij})\) for all \(1\leq i\leq m\), \(1\leq j\leq n\), then we write \(A\geq B\) \((A>B)\). If \(a_{ij}\geq0\) \((a_{ij}>0)\), then \(A=(a_{ij})\in\mathbb{R}^{m\times n}\) is said to be a nonnegative (positive) matrix. If \(a_{ij}\leq0\) for all \(i\neq j\), then A is called a Z-matrix. Furthermore, if A is a Z-matrix and \(A^{-1}\geq0\), then A is called an M-matrix. The matrix \(\langle A\rangle=(\langle a\rangle_{ij})\in\mathbb{R}^{n\times n}\) whose elements satisfy
$$ \langle a\rangle_{ij}= \textstyle\begin{cases} \vert a_{ii} \vert , & i=j, \\ - \vert a_{ij} \vert , & i\neq j, \end{cases} $$
is called the comparison matrix of A.
A matrix A is called an H-matrix if its comparison matrix \(\langle A\rangle\) is an M-matrix, and an \(H_{+}\)-matrix if it is an H-matrix and its diagonal entries are positive; see [19]. A matrix A is called symmetric positive-definite if A is symmetric and satisfies \(x^{T}Ax>0\) for all \(x\in\mathbb{R}^{n}\setminus\{0\}\). In addition, \(A=F-G\) is said to be a splitting of the matrix A if F is a nonsingular matrix, and an H-compatible splitting if it satisfies \(\langle A\rangle=\langle F\rangle- \vert G \vert \).
We use \(\vert A \vert =( \vert a_{ij} \vert )\) to denote the entrywise absolute value of a matrix A and \(\Vert A \Vert _{2}\) to denote its Euclidean (spectral) norm; these notations carry over naturally to vectors in \(\mathbb{R}^{n}\). Moreover, \(\sigma(A)\), \(\rho(A)\), and \(\operatorname{diag}(A)\) denote the spectrum, the spectral radius, and the diagonal part of a matrix A, respectively.
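To make these definitions concrete, the following NumPy sketch (the function names and the test matrix are ours, not from the paper) forms the comparison matrix and checks the M-matrix and \(H_{+}\)-matrix properties numerically; for large matrices one would of course avoid forming the explicit inverse.

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(np.asarray(A, dtype=float))
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_m_matrix(A, tol=1e-12):
    """Z-matrix whose inverse is entrywise nonnegative."""
    off_diagonal = A - np.diag(np.diag(A))
    if np.any(off_diagonal > tol):            # not a Z-matrix
        return False
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:             # singular, hence not an M-matrix
        return False

def is_h_plus_matrix(A):
    """H-matrix (comparison matrix is an M-matrix) with positive diagonal entries."""
    return bool(np.all(np.diag(A) > 0)) and is_m_matrix(comparison_matrix(A))

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
print(is_h_plus_matrix(A))                    # True for this tridiagonal example
```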
Lemma 2.1
([20])
Suppose that \(A\in\mathbb{R}^{n\times n}\) is an M-matrix and \(B\in\mathbb{R}^{n\times n}\) is a Z-matrix. If \(A\leq B\), then B is an M-matrix.
Lemma 2.2
([21])
If \(A\in\mathbb{R}^{n\times n}\) is an \(H_{+}\)-matrix, then \(\vert A^{-1} \vert \leq\langle A\rangle^{-1}\).
Lemma 2.3
([22])
Let \(A\in\mathbb{R}^{n\times n}\). Then \(\rho(A)<1\) if and only if \(\lim_{k\rightarrow\infty}A^{k}=0\).
3 Modulus-based matrix splitting iteration methods for ICP
Let \(g(u)=u-m(u)\), and suppose that \(g(u)=\frac{1}{\gamma}( \vert x \vert +x)\) and \(w=\frac{1}{\gamma}\Omega( \vert x \vert -x)\). Since g is invertible by assumption, we have \(u=g^{-1}[{\frac{1}{\gamma}( \vert x \vert +x)}]\). To present the MMS iteration method, we first give a lemma showing that the ICP (1.1) is equivalent to a fixed-point equation.
Lemma 3.1
([1])
Let \(A=F-G\) be a splitting of the matrix \(A\in\mathbb{R}^{n\times n}\), let γ be a positive constant, and let Ω be a positive diagonal matrix. For the ICP (1.1), the following statements hold:
- (a) If \((u, w)\) is a solution of the ICP (1.1), then \(x=\frac{\gamma}{2}(u-\Omega^{-1}w-m(u))\) satisfies the implicit fixed-point equation
$$ (\Omega+F)x=Gx+(\Omega-A) \vert x \vert -\gamma A m \biggl[g^{-1}\biggl(\frac{1}{\gamma }\bigl( \vert x \vert +x\bigr) \biggr)\biggr]-\gamma q. $$ (3.1)
- (b) If x satisfies the implicit fixed-point equation (3.1), then
$$ u=\frac{1}{\gamma}\bigl( \vert x \vert +x\bigr)+m(u)\quad \textit{and} \quad w=\frac {1}{\gamma}\Omega\bigl( \vert x \vert -x\bigr) $$ (3.2)
is a solution of the ICP (1.1).
Define
Then based on the implicit fixed-point equation (3.1), Hong and Li [1] established the following MMS iteration methods for solving the ICP (1.1).
Method 3.1
([1] The MMS iteration method for ICP)
- Step 1: Given \(\epsilon> 0\) and \(u^{(0)}\in V\), set \(k:=0\).
- Step 2: Find the solution \(u^{(k+1)}\):
  - (1) Calculate the initial vector
$$\begin{aligned} x^{(0)}=\frac{\gamma}{2}\bigl(u^{(k)}-\Omega^{-1} w^{(k)}-m\bigl(u^{(k)}\bigr)\bigr), \end{aligned}$$
and set \(j:=0\).
  - (2) Iteratively compute \(x^{(j+1)}\in\mathbb{R}^{n}\) by solving the equations
$$\begin{aligned} (\Omega+F)x^{(j+1)}=Gx^{(j)}+(\Omega-A) \bigl\vert x^{(j)} \bigr\vert -\gamma A m\bigl(u^{(k)}\bigr)-\gamma q. \end{aligned}$$
  - (3) Set
$$\begin{aligned} u^{(k+1)}=\frac{1}{\gamma}\bigl( \bigl\vert x^{(j+1)} \bigr\vert +x^{(j+1)}\bigr)+m\bigl(u^{(k)}\bigr). \end{aligned}$$
- Step 3: If \(\mathrm{RES}= \vert (Au^{(k+1)}+q)^{T}(u^{(k+1)}-m(u^{(k+1)})) \vert <\epsilon\), then stop; otherwise, set \(k:=k+1\) and return to Step 2.
Method 3.1 converges to the unique solution of the ICP (1.1) under mild conditions and has a faster convergence rate than the classical projection fixed-point iteration methods and the Newton method [1]. However, Method 3.1 cannot be applied directly to the ICP (1.1). On the one hand, the authors did not specify how to compute \(w^{(k)}\). On the other hand, step 2(2) is actually an inner iteration within the kth outer iteration, so the outer-iteration index should appear explicitly in the iteration scheme. To show more clearly how the MMS iteration method works, we give a complete version as follows.
Method 3.2
(The MMS iteration method for ICP)
- Step 1: Given \(\epsilon> 0\) and \(u^{(0)}\in V\), set \(k:=0\).
- Step 2: Find the solution \(u^{(k+1)}\):
  - (1) Calculate the initial vector
$$\begin{aligned} \begin{aligned} &w^{(k)}=Au^{(k)}+q, \\ & x^{(0, k)}=\frac{\gamma}{2}\bigl(u^{(k)}- \Omega^{-1} w^{(k)}-m\bigl(u^{(k)}\bigr)\bigr), \end{aligned} \end{aligned}$$ (3.3)
and set \(j:=0\).
  - (2) Iteratively compute \(x^{(j+1, k)}\in\mathbb{R}^{n}\) by solving the equations
$$ (\Omega+F)x^{(j+1, k)}=Gx^{(j, k)}+(\Omega-A) \bigl\vert x^{(j, k)} \bigr\vert -\gamma A m\bigl(u^{(k)}\bigr)- \gamma q. $$ (3.4)
  - (3) Set
$$ u^{(k+1)}=\frac{1}{\gamma}\bigl( \bigl\vert x^{(j+1, k)} \bigr\vert +x^{(j+1, k)}\bigr)+m\bigl(u^{(k)} \bigr). $$ (3.5)
- Step 3: If \(\mathrm{RES}= \vert (Au^{(k+1)}+q)^{T}(u^{(k+1)}-m(u^{(k+1)})) \vert <\epsilon\), then stop; otherwise, set \(k:=k+1\) and return to Step 2.
From Method 3.1 or Method 3.2 we can see that the MMS iteration method belongs to the class of inner-outer iteration methods. In general, the accuracy of the inner iteration has a great effect on the total number of outer iterations; in actual implementations, however, the inner iterations need not be carried out exactly. Note that the number of outer iterations decreases as the number of inner iterations increases. This may reduce the total computing time, provided that the savings in outer-iteration work is not less than the additional cost of the inner iterations. So, a suitable choice of the number of inner iterations is very important and can greatly reduce the computing time for solving the ICP (1.1). To implement the MMS iteration method efficiently, we can either fix the number of inner iterations or adopt a stopping criterion on the residuals of the inner iterations at each outer iteration. For the inner-iteration implementation aspects of the modulus-based iteration method, we refer to [12, 23] for details.
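As an illustration of how Method 3.2 could be organized with a fixed number of inner iterations, we include the following Python sketch; it reflects our own assumptions (dense NumPy arrays, Ω stored as a vector of diagonal entries, a hypothetical mapping m supplied as a function), not the authors' implementation.

```python
import numpy as np

def mms_icp(A, q, m, F, G, omega_diag, gamma=2.0, inner_steps=5,
            u0=None, eps=1e-8, max_outer=500):
    """Sketch of Method 3.2 with a fixed number of inner iterations per outer step."""
    n = len(q)
    u = np.zeros(n) if u0 is None else np.asarray(u0, dtype=float).copy()
    Omega = np.diag(omega_diag)
    OF = Omega + F                                   # coefficient matrix of the inner solves
    for _ in range(max_outer):
        w = A @ u + q                                # (3.3): w^(k) = A u^(k) + q
        mu = m(u)
        x = 0.5 * gamma * (u - w / omega_diag - mu)  # (3.3): inner starting vector x^(0,k)
        for _ in range(inner_steps):                 # (3.4): modulus-based inner iteration
            rhs = G @ x + (Omega - A) @ np.abs(x) - gamma * (A @ mu) - gamma * q
            x = np.linalg.solve(OF, rhs)
        u = (np.abs(x) + x) / gamma + mu             # (3.5): recover u^(k+1)
        if abs((A @ u + q) @ (u - m(u))) < eps:      # Step 3: residual test
            break
    return u

# Hypothetical usage: m(u) = 0 reduces the ICP to an LCP; Jacobi-type splitting F = diag(A).
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
F = np.diag(np.diag(A))
u = mms_icp(A, q, m=lambda v: np.zeros_like(v), F=F, G=F - A, omega_diag=np.diag(A))
```

In a serious implementation one would factor \(\Omega+F\) once and reuse the factorization in every inner solve.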
4 Convergence analysis
In this section, we establish the convergence theory for Method 3.2 when \(A\in\mathbb{R}^{n\times n}\) is a positive-definite matrix and an \(H_{+}\)-matrix, respectively.
To this end, we first assume that there exists a nonnegative matrix \(N\in\mathbb{R}^{n\times n}\) such that
$$ \bigl\vert m(x)-m(y) \bigr\vert \leq N \vert x-y \vert \quad \text{for all } x, y\in\mathbb{R}^{n}. $$
Further assume that \(u^{(*)}\in{V}\) and \(x^{(*)}\) are the solutions of the ICP (1.1) and the implicit fixed-point equation (3.1), respectively. Then by Lemma 3.1 we have the equalities
$$ u^{(*)}=\frac{1}{\gamma}\bigl( \bigl\vert x^{(*)} \bigr\vert +x^{(*)}\bigr)+m\bigl(u^{(*)}\bigr) $$ (4.1)
and
$$ (\Omega+F)x^{(*)}=Gx^{(*)}+(\Omega-A) \bigl\vert x^{(*)} \bigr\vert -\gamma A m\bigl(u^{(*)}\bigr)-\gamma q. $$ (4.2)
In addition, from Method 3.2 we have
$$ x^{(*)}=\frac{\gamma}{2}\bigl(u^{(*)}-\Omega^{-1}w^{(*)}-m\bigl(u^{(*)}\bigr)\bigr), $$ (4.3)
where \(w^{(*)}=Au^{(*)}+q\).
Subtracting (4.1) from (3.5) and taking absolute values on both sides, we obtain
Similarly, subtracting (4.2) from (3.4), we have
where \(\delta_{1}= \vert (\Omega+F)^{-1}(\Omega-F) \vert +2 \vert (\Omega+F)^{-1}G \vert \text{ and } \delta_{2}= \vert (\Omega+F)^{-1}F \vert (I+ \vert F^{-1}G \vert )\). Substituting (4.5) into (4.4), we obtain
where \(\delta_{3}=2\sum _{i=0}^{j}\delta_{1}^{i}\delta_{2}+I\). Similarly, from (3.3) and (4.3) we have
Finally, substituting (4.7) into (4.6), we get
$$ \bigl\vert u^{(k+1)}-u^{(*)} \bigr\vert \leq Z \bigl\vert u^{(k)}-u^{(*)} \bigr\vert , $$ (4.8)
where
$$ Z=\delta_{1}^{j+1}\bigl(I+ \bigl\vert \Omega^{-1}A \bigr\vert +N\bigr)+ \delta_{3}N. $$
Therefore, if \(\rho(Z)<1\), then Method 3.2 converges to the unique solution of the ICP (1.1).
We summarize our discussion in the following theorem.
Theorem 4.1
Suppose that \(A=F-G\) is a splitting, \(\gamma>0\) is a positive constant, and Ω is a positive diagonal matrix. Let Z be defined as in (4.8). If \(\rho(Z)<1\), then the sequence \(\{ u^{(k)}\}_{k=0}^{\infty}\) generated by Method 3.2 converges to the unique solution \(u^{(*)}\) of ICP (1.1) for any initial vector \(u^{(0)}\in{V}\).
In Theorem 4.1, a general sufficient condition is given to guarantee the convergence of the MMS iteration method. However, this condition may be of little use in practical computations, since it is not easy to verify. In the following two subsections, more specific conditions are given when the system matrix A is positive-definite and an \(H_{+}\)-matrix, respectively.
4.1 The case of positive-definite matrix
Theorem 4.2
Assume that \(A=F-G\) is a splitting of the positive-definite matrix A with \(F\in\mathbb{R}^{n\times n}\) symmetric positive-definite, \(\Omega=\omega I\in\mathbb{R}^{n\times n}\) with \(\omega>0\), and γ is a positive constant. Let \(\mu_{\min}\) and \(\mu_{\max}\) be the smallest and largest eigenvalues of F, and denote \(\eta= \Vert (\Omega+F)^{-1}(\Omega-F) \Vert _{2}+2 \Vert (\Omega+F)^{-1}G \Vert _{2}\), \(\lambda= \Vert N \Vert _{2}\), and \(\tau= \Vert F^{-1}G \Vert _{2}\). Suppose that ω satisfies one of the following cases:
- (1) when \(\tau^{2}\mu_{\max}<\mu_{\min}\),
$$\begin{aligned} \tau\mu_{\max}< \omega< \sqrt{\mu_{\min} \mu_{\max}}; \end{aligned}$$ (4.9)
- (2) when \(\tau<1\) and \(\tau^{2}\mu_{\max}<\mu_{\min}<\tau \mu_{\max}\),
$$\begin{aligned} \sqrt{\mu_{\min}\mu_{\max}}< \omega< \frac{(1-\tau)\mu_{\min }\mu_{\max}}{\tau\mu_{\max}-\mu_{\min}}; \end{aligned}$$ (4.10)
- (3) when \(\tau\mu_{\max}\leq\mu_{\min}\),
$$\begin{aligned} \omega\geq\sqrt{\mu_{\min}\mu_{\max}}. \end{aligned}$$ (4.11)
If \(\lambda<\frac{1-\eta}{3-\eta}\), then for any initial vector \(u^{(0)}\in{V}\), the iteration sequence \(\{u^{(k)}\}_{k=0}^{\infty}\) generated by Method 3.2 converges to the unique solution \(u^{(*)}\) of the ICP (1.1).
Proof
By Theorem 4.1 we just need to derive sufficient conditions for \(\rho(Z)<1\). Based on the definition of Z, we have
$$ \rho(Z)\leq \Vert Z \Vert _{2}\leq\eta^{j+1}\theta+\sigma, $$ (4.12)
where \(\eta= \Vert (\Omega+F)^{-1}(\Omega-F) \Vert _{2}+2 \Vert (\Omega+F)^{-1}G \Vert _{2}\), \(\theta= \Vert I+ \vert \Omega^{-1}A \vert +N \Vert _{2}\), and \(\sigma= \Vert \delta_{3}N \Vert _{2}\).
If \(\Omega=\omega I\in\mathbb{R}^{n\times n}\) is a diagonal and positive-definite matrix and \(F\in\mathbb{R}^{n\times n}\) is a symmetric positive-definite matrix, then it is easy to check that
and
Hence, we have
Similarly to the proof of [13, Theorem 4.2], we know that \(\eta<1\) if the iteration parameter ω satisfies one of conditions (4.9), (4.10), and (4.11). Under these conditions, we have \(\lim_{j\rightarrow\infty}\eta^{j+1}=0\). Since θ is a constant, for any \(\epsilon>0\), there exists an integer J such that, for all \(j\geq J\),
$$ \eta^{j+1}\theta< \epsilon. $$ (4.13)
By the definition of \(\delta_{3}\) and \(0<\eta<1\), we have
In addition,
It is easy to check that \(\Vert \delta_{2} \Vert _{2}<1\) if \(\omega>\tau\mu _{\max}\). Therefore
By (4.12), (4.13), and (4.14) we can choose \(\epsilon\ll1\) such that, for all \(j\geq J\),
As \(j\rightarrow\infty\), we have \(\rho(Z)<1\), provided that \(\lambda <\frac{1-\eta}{3-\eta}\). This completes the proof. □
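As a quick numerical sanity check of the admissible parameter ranges (4.9)-(4.11), the following sketch evaluates η for one admissible ω; the test matrix and the splitting are our own hypothetical choices, and the computation merely illustrates the theorem rather than forming part of its proof.

```python
import numpy as np

def eta(omega, F, G):
    """eta = ||(wI+F)^{-1}(wI-F)||_2 + 2 ||(wI+F)^{-1}G||_2 from Theorem 4.2."""
    n = F.shape[0]
    M = np.linalg.inv(omega * np.eye(n) + F)
    return (np.linalg.norm(M @ (omega * np.eye(n) - F), 2)
            + 2.0 * np.linalg.norm(M @ G, 2))

# A hypothetical symmetric positive-definite matrix and the splitting F = diag(A), G = F - A.
A = np.array([[4.0, -1.0, 0.5],
              [-1.0, 4.0, -1.0],
              [0.5, -1.0, 4.0]])
F = np.diag(np.diag(A))
G = F - A
mu = np.linalg.eigvalsh(F)                     # eigenvalues of the symmetric matrix F
mu_min, mu_max = mu[0], mu[-1]
tau = np.linalg.norm(np.linalg.inv(F) @ G, 2)

# Case (3) of Theorem 4.2: if tau*mu_max <= mu_min, any omega >= sqrt(mu_min*mu_max) is admissible.
if tau * mu_max <= mu_min:
    omega = np.sqrt(mu_min * mu_max)
    print(omega, eta(omega, F, G))             # eta < 1 is expected for this example
```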
Remark 4.1
Although in [1] the modulus-based iteration method was proposed based on a matrix splitting, the authors considered only the following iteration scheme in their convergence analysis:
that is, the convergence of the modulus-based matrix splitting iteration method was not actually proved in [1]. In Theorems 4.1 and 4.2, we give a complete version of the convergence of the MMS iteration method (i.e., Method 3.2). These results generalize those in [1, Theorems 4.1 and 4.2].
4.2 The case of \(H_{+}\)-matrix
In this subsection, we establish the convergence property of the MMS iteration method (i.e., Method 3.2) when \(A\in\mathbb {R}^{n\times n}\) is an \(H_{+}\)-matrix. We obtain a new convergence result.
Theorem 4.3
Let \(A\in\mathbb{R}^{n\times n}\) be an \(H_{+}\)-matrix, and let \(A=F-G\) be an H-compatible splitting of the matrix A, that is, \(\langle A\rangle=\langle F\rangle- \vert G \vert \). Let Ω be a diagonal and positive-definite matrix satisfying \(\Omega\geq\frac{1}{2}\operatorname{diag}(F)\), and let γ be a positive constant. Denote \(\psi_{1}=(\Omega+\langle F \rangle)^{-1}(2 \vert G \vert + \vert \Omega-F \vert )\) and \(\psi_{2}=(\Omega+\langle F\rangle)^{-1} \vert F \vert (I+ \vert F^{-1}G \vert )\). If
$$ \biggl(\frac{2 \Vert \psi_{2} \Vert _{2}}{1- \Vert \psi_{1} \Vert _{2}}+1\biggr)\lambda< 1, $$
then, for any initial vector \(u^{(0)}\in{V}\), the iteration sequence \(\{u^{(k)}\}_{k=0}^{\infty}\) generated by Method 3.2 converges to the unique solution \(u^{(*)}\) of the ICP (1.1).
Proof
Since \(A=F-G\) is an H-compatible splitting of the matrix A, we have \(\langle A\rangle=\langle F\rangle- \vert G \vert \leq\langle F\rangle\). By Lemmas 2.1 and 2.2 we know that \(F\in\mathbb{R}^{n\times n}\) is an \(H_{+}\)-matrix and
$$ \bigl\vert (\Omega+F)^{-1} \bigr\vert \leq\bigl(\Omega+\langle F\rangle\bigr)^{-1}. $$
For this case, (4.5) can be somewhat modified as
Similarly to the analysis of Theorem 4.1, with only technical modifications, we obtain
$$ \bigl\vert u^{(k+1)}-u^{(*)} \bigr\vert \leq\hat{Z} \bigl\vert u^{(k)}-u^{(*)} \bigr\vert , $$
where \(\hat{Z}=\psi_{1}^{j+1}(I+ \vert \Omega^{-1}A \vert +N)+\psi_{3} N\) and \(\psi _{3}=2\sum _{i=0}^{j}\psi_{1}^{i}\psi_{2}+I\).
Now, we turn to study the conditions for \(\rho(\hat{Z})<1\) that guarantee the convergence of the MMS iteration method. Based on the definition of Ẑ, we have
$$ \rho(\hat{Z})\leq \Vert \hat{Z} \Vert _{2}\leq \bigl\Vert \psi_{1}^{j+1} \bigr\Vert _{2}\theta+ \Vert \psi_{3}N \Vert _{2}, $$
where θ is as in the proof of Theorem 4.2.
From [13, Theorem 4.3] we know that \(\rho(\psi_{1})<1\) if the parameter matrix Ω satisfies \(\Omega\geq\frac {1}{2}\operatorname{diag}(F)\). By Lemma 2.3 we have \(\lim _{j\rightarrow\infty}\psi_{1}^{j+1}=0\). Besides, θ is a positive constant. Thus, for any \(\epsilon_{1}>0\) (without loss of generality, \(\epsilon_{1}\ll1\)), there exists an integer \(j_{0}\) such that, for all \(j\geq j_{0}\),
Therefore, for all \(j\geq j_{0}\), we have
As \(j\rightarrow\infty\), we have \(\rho(\hat{Z})<1\) if \((\frac{2 \Vert \psi_{2} \Vert _{2}}{1- \Vert \psi_{1} \Vert _{2}}+1)\lambda<1\). This completes the proof. □
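To illustrate how the quantities in Theorem 4.3 could be checked numerically, the following sketch evaluates ψ1, ψ2, and λ for hypothetical data and tests the sufficient condition stated above; the function names, the test matrices, and the Lipschitz-type matrix N are our own assumptions.

```python
import numpy as np

def comparison_matrix(M):
    """Comparison matrix <M>: |m_ii| on the diagonal, -|m_ij| off the diagonal."""
    C = -np.abs(np.asarray(M, dtype=float))
    np.fill_diagonal(C, np.abs(np.diag(M)))
    return C

def theorem_4_3_condition(F, G, omega_diag, N):
    """Evaluate the sufficient condition of Theorem 4.3 for the given data (a sketch)."""
    n = F.shape[0]
    Omega = np.diag(omega_diag)
    inv_of = np.linalg.inv(Omega + comparison_matrix(F))
    psi1 = inv_of @ (2.0 * np.abs(G) + np.abs(Omega - F))
    psi2 = inv_of @ np.abs(F) @ (np.eye(n) + np.abs(np.linalg.inv(F) @ G))
    lam = np.linalg.norm(N, 2)
    n1, n2 = np.linalg.norm(psi1, 2), np.linalg.norm(psi2, 2)
    ok_omega = bool(np.all(omega_diag >= 0.5 * np.diag(F)))
    ok_lambda = n1 < 1 and (2.0 * n2 / (1.0 - n1) + 1.0) * lam < 1
    return ok_omega and ok_lambda

# Hypothetical data: an H_+-matrix A, the Jacobi-type H-compatible splitting F = diag(A),
# Omega = diag(A), and a small nonnegative matrix N bounding the variation of m.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
F = np.diag(np.diag(A))
print(theorem_4_3_condition(F, F - A, omega_diag=np.diag(A), N=0.05 * np.ones((3, 3))))
```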
5 Conclusions
In this paper, we have studied a class of modulus-based matrix splitting (MMS) iteration methods proposed in [1] for solving the implicit complementarity problem (1.1). We have modified the implementation of the MMS iteration method and demonstrated a complete version of its convergence theory. New convergence results have been obtained when the system matrix A is a positive-definite matrix and an \(H_{+}\)-matrix, respectively.
References
1. Hong, J-T, Li, C-L: Modulus-based matrix splitting iteration methods for a class of implicit complementarity problems. Numer. Linear Algebra Appl. 23, 629-641 (2016)
2. Pang, JS: The implicit complementarity problem. In: Mangasarian, OL, Meyer, RR, Robinson, SM (eds.) Nonlinear Programming 4. Academic Press, New York (1981)
3. Ferris, MC, Pang, JS: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669-713 (1997)
4. Billups, SC, Murty, KG: Complementarity problems. J. Comput. Appl. Math. 124, 303-328 (2000)
5. Cottle, RW, Pang, JS, Stone, RE: The Linear Complementarity Problem. SIAM, Philadelphia (2009)
6. Pang, JS: On the convergence of a basic iterative method for the implicit complementarity problems. J. Optim. Theory Appl. 37, 149-162 (1982)
7. Noor, M: Fixed point approach for complementarity problems. J. Optim. Theory Appl. 133, 437-448 (1988)
8. Zhan, W-P, Zhou, S, Zeng, J-P: Iterative methods for a kind of quasi-complementarity problems. Acta Math. Appl. Sin. 23, 551-556 (2000)
9. Yuan, Q, Yin, H-Y: Stationary point of the minimization reformulations of implicit complementarity problems. Numer. Math. J. Chin. Univ. 31, 11-18 (2009)
10. Schäfer, U: On the modulus algorithm for the linear complementarity problem. Oper. Res. Lett. 32, 350-354 (2004)
11. Mangasarian, OL, Meyer, RR: Absolute value equations. Linear Algebra Appl. 419, 359-367 (2006)
12. Dong, J-L, Jiang, M-Q: A modified modulus method for symmetric positive-definite linear complementarity problems. Numer. Linear Algebra Appl. 16, 129-143 (2009)
13. Bai, Z-Z: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917-933 (2010)
14. Bai, Z-Z, Zhang, L-L: Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 20, 425-439 (2013)
15. Zheng, N, Yin, J-F: Accelerated modulus-based matrix splitting iteration methods for linear complementarity problem. Numer. Algorithms 64, 245-262 (2013)
16. Zheng, N, Yin, J-F: Convergence of accelerated modulus-based matrix splitting iteration methods for linear complementarity problem with an \(H_{+}\)-matrix. J. Comput. Appl. Math. 260, 281-293 (2014)
17. Zhang, L-L, Zhang, Y-P, Ren, Z-R: New convergence proofs of modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Linear Algebra Appl. 481, 83-93 (2015)
18. Dong, J-L, Gao, J-B, Ju, F-J, Shen, J-H: Modulus methods for nonnegatively constrained image restoration. SIAM J. Imaging Sci. 9, 1226-1246 (2016)
19. Bai, Z-Z: On the convergence of the multisplitting methods for linear complementarity problem. SIAM J. Matrix Anal. Appl. 21, 67-78 (1999)
20. Frommer, A, Szyld, DB: H-splittings and two-stage iterative methods. Numer. Math. 63, 345-356 (1992)
21. Frommer, A, Mayer, G: Convergence of relaxed parallel multisplitting methods. Linear Algebra Appl. 119, 141-152 (1989)
22. Varga, RS: Matrix Iterative Analysis, 2nd edn. Springer, Berlin (2000)
23. Zhang, L-L: Two-stage multisplitting iteration methods using modulus-based matrix splitting as inner iteration for linear complementarity. J. Optim. Theory Appl. 160, 189-203 (2014)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Nos. 11771225, 61771265), the Natural Science Foundation of Jiangsu Province (No. BK20151272), the ‘333’ Program Talents of Jiangsu Province (No. BRA2015356), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX17_1905).
Contributions
The authors contributed equally to this work. All authors have read and approved the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, A., Cao, Y. & Shi, Q. Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems. J Inequal Appl 2018, 2 (2018). https://doi.org/10.1186/s13660-017-1593-7
MSC
- 65F10
Keywords
- implicit complementarity problem
- matrix splitting
- modulus-based iterative method
- convergence