A parallel multisplitting method with self-adaptive weightings for solving H-matrix linear systems
 Ruiping Wen^{1} and
 Hui Duan^{2}
https://doi.org/10.1186/s13660-017-1370-7
© The Author(s) 2017
Received: 17 January 2017
Accepted: 19 April 2017
Published: 1 May 2017
Abstract
In this paper, a parallel multisplitting iterative method with self-adaptive weighting matrices is presented for the linear system of equations whose coefficient matrix is an H-matrix. The zero pattern of the weighting matrices is determined in advance, while their nonzero entries are determined by finding the optimal solution in a hyperplane determined by the α points generated by the parallel multisplitting iterations. In particular, the nonnegativity restriction on the weighting matrices is removed. The convergence theory is established for the parallel multisplitting method with self-adaptive weightings. Finally, a numerical example shows that the parallel multisplitting iterative method with self-adaptive weighting matrices is effective.
1 Introduction and preliminaries
As is well known, the weighting matrices play an important role in parallel multisplitting iterative methods. However, in all the above-mentioned methods the weighting matrices are determined in advance; since it is not known whether these choices are good ones, the efficiency of the parallel methods suffers. Recently, Wen and coauthors [13] discussed self-adaptive weighting matrices for symmetric positive definite linear systems of equations, and Wang and coauthors [11] did so for non-Hermitian positive definite linear systems. In this paper, we focus on the H-matrix, a notion that originates with Ostrowski [14]. H-matrices are an important class of matrices with many applications; for example, numerical methods for solving PDEs are a source of many linear systems of equations whose coefficient matrices are H-matrices (see [15-18]). Can self-adaptive weighting matrices also be constructed for linear systems whose coefficient matrix is an H-matrix? We discuss this question in this paper.

Compared with existing parallel multisplitting methods, the new method has two features:

The weighting matrices are not necessarily nonnegative;

Only one of the α splittings is required to be convergent.
In the rest of this study, we first give some notation and preliminaries in Section 1; a parallel multisplitting iterative method with self-adaptive weighting matrices is then put forward in Section 2. The convergence of the method is established in Section 3. Moreover, we report computational results for a test problem in Section 4. We end the paper with a conclusion in Section 5.
Here is some essential notation together with preliminaries. \(\mathbf{R}^{n\times n}\) denotes the set of \(n\times n\) real matrices, and \(\mathbf{R}^{n}\) the n-dimensional real vector space. \(A^{T}\) represents the transpose of the matrix A, and \(x^{T}\) the transpose of the vector x. \(\langle A\rangle\) stands for the comparison (Ostrowski) matrix of the matrix A, that is, \(\langle A\rangle_{ii}=|a_{ii}|\) and \(\langle A\rangle_{ij}=-|a_{ij}|\) for \(i\neq j\), and \(|A|\) is the entrywise absolute value of A.
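The two matrix operations just introduced can be sketched in a few lines of pure Python (the function names are ours, chosen for illustration):

```python
def abs_matrix(A):
    # |A|: entrywise absolute value of A
    return [[abs(a) for a in row] for row in A]

def comparison_matrix(A):
    # <A>: keep |a_ii| on the diagonal, negate |a_ij| off the diagonal
    n = len(A)
    return [[abs(A[i][j]) if i == j else -abs(A[i][j]) for j in range(n)]
            for i in range(n)]

A = [[4.0, -1.0],
     [2.0,  5.0]]
# comparison_matrix(A) gives [[4.0, -1.0], [-2.0, 5.0]]
```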
In what follows, when A is strictly diagonally dominant by rows or by columns, A is called a strictly diagonally dominant matrix.
Definition 1.1
[17]
The matrix A is an H-matrix if there exists a positive diagonal matrix D such that the matrix DA is a strictly diagonally dominant matrix.
Definition 1.3
Suppose that A is an H-matrix. Let \(A=M-N\), which is called an H-compatible splitting if \(\langle A\rangle =\langle M\rangle-|N|\).
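As an illustration, the Jacobi-type splitting \(M=\operatorname{diag}(A)\), \(N=M-A\) of a strictly diagonally dominant matrix is H-compatible, and the defining identity can be checked numerically. A minimal sketch in Python (the helper names are ours):

```python
def comparison(A):
    # <A>: |a_ii| on the diagonal, -|a_ij| off the diagonal
    n = len(A)
    return [[abs(A[i][j]) if i == j else -abs(A[i][j]) for j in range(n)]
            for i in range(n)]

def is_h_compatible(A, M, N):
    # check the defining identity <A> = <M> - |N| entrywise
    n = len(A)
    CA, CM = comparison(A), comparison(M)
    return all(abs(CA[i][j] - (CM[i][j] - abs(N[i][j]))) < 1e-12
               for i in range(n) for j in range(n))

A = [[4.0, -1.0],
     [-2.0, 5.0]]
# Jacobi-type splitting: M = diag(A), N = M - A
M = [[4.0, 0.0],
     [0.0, 5.0]]
N = [[M[i][j] - A[i][j] for j in range(2)] for i in range(2)]
# is_h_compatible(A, M, N) evaluates to True here
```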
2 Description of the method
Here, we present a parallel multisplitting iterative method with the selfadaptive weighting matrices.
Method 2.1
 Step 0.:

Given a tolerance \(\epsilon>0\) and an initial vector \(x^{(0)}\). For \(i=1,2,\ldots,\alpha\) and \(k=1,2,\ldots\), give also the index sets \(S(i,k)\), where for some \(i_{0}\), \(S(i_{0},k)=\{1,2,\ldots,n\}\). Generate the sequence \(x^{(k)}\), \(k=1,2,\ldots\), until convergence.
 Step 1.:

Solution in parallel. For the ith processor, \(i=1,2,\ldots,\alpha\): compute \(x_{j}^{(i,k)}\), \(j\in S(i,k)\), by
$$ M_{i}x^{(i,k)}=N_{i}x^{(i,k-1)}+b, $$(2.6)
where \(x^{(i,k)}=(x_{1}^{(i,k)},\ldots,x_{n}^{(i,k)})^{T}\).
 Step 2.:

For the \(i_{0}\)th processor, set
$$x=\sum_{i=1}^{\alpha}E_{i}^{(k)}x_{i}^{(k)}= \left ( \begin{array}{c} \sum_{i=1}^{\alpha}e_{1}^{(i,k)}x_{1}^{(i,k)} \\ \sum_{i=1}^{\alpha}e_{2}^{(i,k)}x_{2}^{(i,k)}\\ \vdots\\ \sum_{i=1}^{\alpha}e_{n}^{(i,k)}x_{n}^{(i,k)} \end{array} \right ). $$
Solve the following optimization problem:
$$\begin{aligned}& \min_{e_{j}^{(i,k)}\in S(k)}\|Ax-b\|_{1} \\& \quad \mbox{s.t. } \sum_{i=1}^{\alpha}E_{i}^{(k)}=I, \end{aligned}$$(2.7)
or
$$\begin{aligned}& \|Ax-b\|_{1}\le\bigl\|Ax_{i_{0}}^{(k)}-b\bigr\|_{1} \\& \quad \mbox{s.t. } \sum_{i=1}^{\alpha}E_{i}^{(k)}=I. \end{aligned}$$(2.8)
 Step 3.:

Compute
$$ x^{(k)}=\sum_{i=1}^{\alpha}E_{i}^{(k)}x_{i}^{(k)}. $$(2.9)
 Step 4.:

If \(\|Ax^{(k)}-b\|_{1}\leq\epsilon\), stop; otherwise, set \(k\Leftarrow k+1\) and go to Step 1.
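Method 2.1 can be illustrated with a small serial sketch. This is a simplification, not the authors' parallel implementation: it uses α = 2 splittings (Jacobi and Gauss-Seidel), replaces the general diagonal weighting matrices \(E_{i}^{(k)}\) by scalar weights \(eI\) and \((1-e)I\), and replaces the simplex method of Step 2 by a coarse grid search over \(e\in[-0.5,1.5]\), so the weights may indeed be negative:

```python
# Serial sketch of Method 2.1 with alpha = 2 splittings and scalar weights.
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def solve_lower(M, b):
    # forward substitution; works for the diagonal and lower triangular M below
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i))) / M[i][i]
    return x

def r1(A, x, b):
    # 1-norm of the residual Ax - b
    return sum(abs(v - w) for v, w in zip(matvec(A, x), b))

# tiny strictly diagonally dominant (hence H-) matrix, exact solution (1, 2, 3)
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]

# two splittings A = M_i - N_i: Jacobi (M1 = D) and Gauss-Seidel (M2 = D - L)
M1 = [[A[i][j] if i == j else 0.0 for j in range(3)] for i in range(3)]
M2 = [[A[i][j] if j <= i else 0.0 for j in range(3)] for i in range(3)]

x = [0.0, 0.0, 0.0]
for k in range(200):
    # Step 1: the two local solves (done on separate processors in the method)
    cands = []
    for M in (M1, M2):
        N = [[M[i][j] - A[i][j] for j in range(3)] for i in range(3)]
        cands.append(solve_lower(M, [v + w for v, w in zip(matvec(N, x), b)]))
    # Step 2, simplified: scalar weights e and 1 - e (allowed to be negative),
    # chosen by a coarse grid search minimizing the 1-norm residual
    e = min((t / 20.0 for t in range(-10, 31)),
            key=lambda e: r1(A, [e * u + (1 - e) * v
                                 for u, v in zip(*cands)], b))
    # Step 3: form the weighted combination; Step 4: test the residual
    x = [e * u + (1 - e) * v for u, v in zip(*cands)]
    if r1(A, x, b) < 1e-10:
        break
```

Because the grid contains \(e=0\), each combined iterate is at least as good, in the 1-norm residual, as the plain Gauss-Seidel candidate.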
Remark 2.1
The implementation of this method is such that at each iteration there are α independent problems of the kind (2.6), where \(x_{j}^{(i,k)}\), \(j\in S(i,k)\), represents the solution of the ith local problem. The work for each equation in (2.6) is assigned to one processor, and communication is required only to produce the update given in (2.9). In general, some (most) of the diagonal elements of \(E_{i}\) are zero, and therefore the corresponding components of \(x_{j}^{(i,k)}\), \(j\in S(i,k)\), need not be calculated.
Remark 2.2
We may use an optimization method such as the simplex method (see [19]) to obtain an approximate solution of (2.7). Usually, we can compute the optimal weighting every two or three iterations instead of at each iteration step. Hence, the computational cost of (2.7) is about \(4n^{2}\) or \(6n^{2}\) flops.
Consider the one-dimensional programming problem
$$ \min_{x}\sum_{j=1}^{n}|b_{j}-a_{j}x|, \quad a_{j}>0, $$(2.10)
under the following assumptions.

Assumptions
 (a)
\(b_{j}\geq0\), \(j=1,2,\ldots,n\).
 (b)
\(\frac{b_{1}}{a_{1}}\leq \frac{b_{2}}{a_{2}}\leq\cdots\leq\frac{b_{n}}{a_{n}}\).
Lemma 2.1
Let programming (2.10) satisfy the Assumptions, and let \(x_{j}=\frac{b_{j}}{a_{j}}\), \(j=1,2,\ldots,n\). Then there exists some \(j_{0}\) such that \(x_{j_{0}}\) is a solution of programming (2.10).
Proof
As we know, the solution satisfies \(x^{*}\in[\frac{b_{1}}{a_{1}},\frac{b_{n}}{a_{n}}]\). Let \(P=\{\frac{b_{1}}{a_{1}},\frac{b_{2}}{a_{2}},\ldots,\frac{b_{n}}{a_{n}}\}\), with \(\frac{b_{1}}{a_{1}}<\frac{b_{2}}{a_{2}}<\cdots<\frac{b_{n}}{a_{n}}\), be a partition of \([\frac{b_{1}}{a_{1}},\frac{b_{n}}{a_{n}}]\). Thus we obtain the set of subintervals induced by the partition P.
On every subinterval, the function \(\sum_{j=1}^{n}|b_{j}-a_{j}x|\) is linear, so the minimum of this piecewise linear function is attained at a partition point. Hence the lemma is proved. □
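By the lemma, problem (2.10) can be solved by evaluating the objective at the n breakpoints \(b_{j}/a_{j}\) and taking the best one; this is the classical weighted-median characterization of one-dimensional 1-norm minimization. A minimal sketch, with a hypothetical function name:

```python
def min_l1(a, b):
    # minimize f(x) = sum_j |b_j - a_j x|; by Lemma 2.1 the minimum is
    # attained at one of the breakpoints x_j = b_j / a_j (a_j > 0 assumed)
    breakpoints = [bj / aj for aj, bj in zip(a, b)]
    def f(x):
        return sum(abs(bj - aj * x) for aj, bj in zip(a, b))
    return min(breakpoints, key=f)

# breakpoints are 1, 2, 3; here the minimizer is x = 3.0
x_star = min_l1([1.0, 2.0, 4.0], [1.0, 4.0, 12.0])
```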
Corollary 2.2
From Lemma 2.1 we can obtain an approximate solution; its complexity is about \(4n^{2}\) flops.
3 Convergence analysis
In this section, we discuss the convergence of Method 2.1 under reasonable assumptions.
Lemma 3.1
Proof
From the definition of an H-compatible splitting, we know that \(\langle A\rangle=\langle M\rangle-|N|\).
Lemma 3.2
Proof
Theorem 3.3
Let \(A=M_{i}-N_{i}\), \(i=1,2,\ldots,\alpha\), be α splittings of the H-matrix A, and for some \(i_{0}\), let \(A=M_{i_{0}}-N_{i_{0}}\) be an H-compatible splitting. Assume that the \(E_{i}^{(k)}\), \(i=1,2,\ldots,\alpha\); \(k=1,2,\ldots\), are generated by (2.7) or (2.8) in Method 2.1. Then the sequence \(\{x^{(k)}\}\) generated by Method 2.1 converges to the unique solution \(x_{*}\) of (1.1).
Proof
4 Numerical experiments
In this section, a test problem is given to assess the feasibility and effectiveness of Method 2.1 in terms of both the number of iterations (denoted by IT) and the computing time in seconds (denoted by CPU). All tests are started from the zero vector and terminated when the current iterate satisfies \(\|r^{(k)}\|_{1}<10^{-6}\), where \(r^{(k)}\) is the residual at the current (say, kth) iteration, or when the number of iterations reaches 20,000; in the latter case the iteration is considered to have failed. We solve (2.7) or (2.8) in the optimization step by the simplex method (see [19]).
Problem
 (a)The block Jacobi splitting (denoted by BJ)$${A}={M}_{1}-{N}_{1}, \qquad {M}_{1}={D}. $$
 (b)The block Gauss-Seidel splitting I (denoted by BGS-I)$${A}={M}_{2}-{N}_{2},\qquad {M}_{2}= {D}-{L}. $$
 (c)The block Gauss-Seidel splitting II (denoted by BGS-II)$${A}={M}_{3}-{N}_{3},\qquad {M}_{3}= {D}-{U}. $$
Here \(A=D-L-U\), where D is the block diagonal part of A and \(-L\), \(-U\) are its strict block lower and upper triangular parts.
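Assuming the standard decomposition \(A=D-L-U\) just described, the three splittings can be formed mechanically. The sketch below treats each entry as a 1 × 1 block (the function name and labels are ours):

```python
def split(A, kind):
    # A = M - N for the three splittings of Section 4 (1x1 blocks):
    #   BJ:     M = D      (diagonal part of A)
    #   BGS-I:  M = D - L  (lower triangular part of A)
    #   BGS-II: M = D - U  (upper triangular part of A)
    n = len(A)
    keep = {"BJ": lambda i, j: i == j,
            "BGS-I": lambda i, j: j <= i,
            "BGS-II": lambda i, j: j >= i}[kind]
    M = [[A[i][j] if keep(i, j) else 0.0 for j in range(n)] for i in range(n)]
    N = [[M[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    return M, N

A = [[4.0, -1.0, 2.0],
     [1.0,  5.0, -1.0],
     [0.0,  2.0,  6.0]]
```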
Radii of the classical iterative methods

n          BJ       BGS-I    BGS-II
32 × 32    0.9923   0.9840   0.9847
64 × 64    0.9980   0.9960   0.9961
128 × 128  0.9995   0.9990   0.9991
The comparison of computational results among BGS-I, BGS-II, and Method 2.1

n          Metric   BGS-I      BGS-II     Method 2.1
32 × 32    IT       866        869        653
           CPU(s)   3.465      3.838      2.5130
64 × 64    IT       3,353      3,358      2,314
           CPU(s)   64.493     78.830     23.2740
128 × 128  IT       13,191     13,202     6,799
           CPU(s)   1,203.512  1,367.604  365.7850
The comparison of computational results between Method 2.1 and the basic methods

n          Metric   Method 2.1  BMeth 1     BMeth 2     BMeth 3
32 × 32    IT       653         1,308       1,226       1,175
           CPU(s)   2.5130      4.4435      3.3955      3.1638
64 × 64    IT       2,314       4,571       4,601       4,489
           CPU(s)   23.2740     57.1624     57.1404     55.3278
128 × 128  IT       6,799       17,865      17,662      17,877
           CPU(s)   365.7850    1,048.6420  1,047.1182  1,040.2860
Here, we denote by BMeth the basic parallel multisplitting iterative method with fixed weighting matrices (see [1]). For the basic method, we use three groups of weighting matrices generated by random selection; the corresponding parallel multisplitting iterative methods are denoted by BMeth 1, BMeth 2, and BMeth 3, respectively.
From the above numerical experiments we find that the average speedup of the new parallel multisplitting iterative method (Method 2.1) over the basic methods, averaged over all computational results, is about 2.3.
5 Conclusion
The parallel multisplitting iterative method with self-adaptive weighting matrices has been proposed for the linear system of equations (1.1) when the coefficient matrix is an H-matrix. The convergence theory has been established for this method, and the numerical results show that the new parallel multisplitting iterative method with self-adaptive weightings is effective.
Declarations
Acknowledgements
The authors are very much indebted to the anonymous referees for their helpful comments and suggestions. This work is supported by a grant from the NSF of Shanxi Province (201601D011004).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 O’Leary, DP, White, R: Multisplittings of matrices and parallel solutions of linear systems. SIAM J. Algebraic Discrete Methods 6, 630-640 (1985)
 Bai, ZZ: Parallel matrix multisplitting block relaxation iteration methods. Math. Numer. Sin. 17, 238-252 (1995)
 Bai, ZZ: On the convergence of additive and multiplicative splitting iterations for systems of linear equations. J. Comput. Appl. Math. 154, 195-214 (2003)
 Bai, ZZ: On the convergence of parallel nonstationary multisplitting iteration methods. J. Comput. Appl. Math. 159, 1-11 (2003)
 Bai, ZZ: A new generalized asynchronous parallel multisplitting iteration method. J. Comput. Math. 17, 449-456 (1999)
 Evans, DJ: Blockwise matrix multisplitting multiparameter block relaxation methods. Int. J. Comput. Math. 64, 103-118 (1997)
 Sun, JC, Wang, DR: A unified framework for the construction of various matrix multisplitting iterative methods for large sparse system of linear equations. Comput. Math. Appl. 32, 51-76 (1996)
 Frommer, A, Mayer, G: Convergence of relaxed parallel multisplitting methods. Linear Algebra Appl. 119, 141-152 (1989)
 Neumann, M, Plemmons, RJ: Convergence of parallel multisplitting iterative methods for M-matrices. Linear Algebra Appl. 88, 559-573 (1987)
 Szyld, DB, Jones, MT: Two-stage and multisplitting methods for the parallel solution of linear systems. SIAM J. Matrix Anal. Appl. 13, 671-679 (1992)
 Wang, CL, Meng, GY, Yong, XR: Modified parallel multisplitting iterative methods for non-Hermitian positive definite systems. Adv. Comput. Math. 38, 859-872 (2013)
 Wang, DR: On the convergence of the parallel multisplitting AOR algorithm. Linear Algebra Appl. 154-156, 473-486 (1991)
 Wen, RP, Wang, CL, Yan, XH: Generalizations of nonstationary multisplitting iterative method for symmetric positive definite linear systems. Appl. Math. Comput. 216, 1707-1714 (2010)
 Varga, RS: On recurring theorems on diagonal dominance. Linear Algebra Appl. 13, 1-9 (1976)
 Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. Classics in Applied Mathematics. SIAM, Philadelphia (1994)
 Hirsh, RS, Rudy, DH: The role of diagonal dominance and cell Reynolds number in implicit difference methods for fluid mechanics. J. Comput. Phys. 16, 304-310 (1974)
 Li, BS: Generalizations of diagonal dominance in matrix theory. PhD thesis, University of Regina (1997)
 Meijerink, JA, van der Vorst, HA: Iterative Solution of Linear Systems Arising from Discrete Approximations to Partial Differential Equations. Academisch Computer Centrum, Utrecht (1974)
 Nelder, JA, Mead, R: A simplex method for function minimization. Comput. J. 7, 308-313 (1965)