To solve the large sparse linear system of equations

$$ Ax=b, \quad A\in\mathbf{R}^{n\times n} \mbox{ nonsingular and } b\in \mathbf{R}^{n}, $$

(1.1)

In 1985, O’Leary and White [1] first proposed parallel methods based on multisplittings of matrices, and several basic convergence results may be found there. A multisplitting of *A* is a collection of triples of \(n\times n\) matrices \((M_{i},N_{i},E_{i})_{i=1}^{\alpha}\), where \(\alpha\leq n\) is a positive integer and \(M_{i},N_{i},E_{i}\in\mathbf{R}^{n\times n}\); the following method for solving system (1.1) was then given:

$$\begin{aligned}& A=M_{i}-N_{i} ,\quad i=1,2,\ldots,\alpha, \\& M_{i}x_{i}^{(k)}=N_{i}x^{(k-1)}+b, \quad i=1,2,\ldots,\alpha; k=1,2,\ldots, \\& x^{(k)}=\sum_{i=1}^{\alpha}E_{i}x_{i}^{(k)}, \end{aligned}$$

where \(M_{i}\), \(i=1,2,\ldots,\alpha\), are nonsingular and \(E_{i}=\operatorname{diag}(e_{1}^{(i)}, e_{2}^{(i)}, \ldots, e_{n}^{(i)})\), \(i=1,2,\ldots,\alpha\), satisfy \(\sum_{i=1}^{\alpha}E_{i}=I\) (\(I\in\mathbf{R}^{n\times n}\) is the identity matrix). Here \(x^{(0)}\) is an initial guess for the solution \(x_{*}=A^{-1}b\).
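The iteration above can be sketched numerically. The following is a minimal illustration with \(\alpha=2\); the test matrix, the forward/backward Gauss–Seidel-type splittings, and the fixed equal weights are our own illustrative choices, not taken from the paper.

```python
import numpy as np

n = 4
# A strictly diagonally dominant (hence H-) matrix, so simple splittings converge.
A = np.diag([4.0] * n) + np.diag([-1.0] * (n - 1), 1) + np.diag([-1.0] * (n - 1), -1)
b = np.ones(n)

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

# Two splittings A = M_i - N_i: forward and backward Gauss-Seidel-type.
M = [D + L, D + U]
N = [M[0] - A, M[1] - A]

# Fixed diagonal weighting matrices E_i with E_1 + E_2 = I.
E = [0.5 * np.eye(n), 0.5 * np.eye(n)]

x = np.zeros(n)                                  # initial vector x^(0)
for k in range(50):
    # Each processor i solves M_i x_i^(k) = N_i x^(k-1) + b (here, sequentially).
    xi = [np.linalg.solve(M[i], N[i] @ x + b) for i in range(2)]
    x = sum(E[i] @ xi[i] for i in range(2))      # weighted combination x^(k)

print(np.allclose(A @ x, b, atol=1e-8))          # True: iterates converge to A^{-1} b
```

In a genuine parallel implementation the two solves inside the loop would run on different processors; the sketch only mimics the data flow.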

There has been extensive study (see [1–13]) of parallel iteration methods for solving the large sparse system of linear equations (1.1). In particular, when the coefficient matrix *A* is an *M*-matrix or an *H*-matrix, many parallel multisplitting iterative methods were presented (see, e.g., [1–7, 10]), and the weighting matrices \(E_{i}\), \(i=1,2,\ldots,\alpha\), were generalized (see, e.g., [3, 4, 6–10]) to

$$ \sum_{i=1}^{\alpha}E_{i}^{(k)}=I \ (\mbox{or }\neq I),\quad E_{i}^{(k)}\geq0, \mbox{diagonal}, i=1,2,\ldots,\alpha; k=1,2,\ldots, $$

(1.2)

but these weighting matrices were still preset in advance as multiple parameters.

As is well known, the weighting matrices play an important role in parallel multisplitting iterative methods. In all the above-mentioned methods, however, the weighting matrices are determined in advance; whether they are good choices is not known, and this affects the efficiency of the parallel methods. Recently, Wen and co-authors [13] discussed self-adaptive weighting matrices for symmetric positive definite linear systems of equations, and Wang and co-authors [11] discussed self-adaptive weighting matrices for non-Hermitian positive definite linear systems of equations. In this paper, we focus on the *H*-matrix, a notion that originates with Ostrowski [14]. The *H*-matrices form an important class of matrices with many applications; for example, numerical methods for solving PDEs are a source of many linear systems of equations whose coefficient matrices are *H*-matrices (see [15–18]). Can self-adaptive weighting matrices also be constructed when the coefficient matrix is an *H*-matrix? We discuss this problem in this paper.

Here, we generalize the weighting matrices \(E_{i}\) (\(i=1,2,\ldots,\alpha\)) to \(E_{i}^{(k)}\) (\(i=1,2,\ldots,\alpha\); \(k=1,2,\ldots\)), and their construction can be divided into two steps. First, each splitting is handled by a different processor, and the weighting matrices \(E_{i}^{(k)}\) mask a large portion of the matrix, so that each processor deals with a smaller matrix. Second, the weighting matrices \(E_{i}^{(k)}\) are chosen to approximate ‘the best’ choice for the *k*th iteration in the hyperplane generated by \(\{x_{i}^{(k)}, i=1,2,\ldots,\alpha\}\), \(k=1,2,\ldots\) . We determine the weighting matrices \(E_{i}^{(k)}\) for these two reasons: firstly, the execution time is decreased by the set of zero entries in the weighting matrices; secondly, self-adaptivity is achieved by finding a ‘good’ point in the hyperplane generated by \(\{x_{i}^{(k)}, i=1,2,\ldots,\alpha\}\), \(k=1,2,\ldots\) . Thus, a scheme for finding self-adaptive weighting matrices is established for the parallel multisplitting iterative method; it has two advantages: reduced work per processor, and a near-optimal combination of the local iterates at every step.
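One plausible realization of the ‘best point in the hyperplane’ idea can be sketched as follows. For simplicity we use scalar weights \(e_i\) summing to 1 (rather than full diagonal matrices) and pick them by least squares on the residual; this simplification, the helper name `adaptive_weights`, and the example data are our illustrative assumptions, not the paper’s actual scheme.

```python
import numpy as np

def adaptive_weights(A, b, xs):
    """Choose scalar weights e_i (summing to 1) minimizing the residual
    ||b - A * sum_i e_i x_i||_2 over the hyperplane through the local
    iterates x_i^{(k)}. This is a small least-squares problem in the e_i."""
    V = np.column_stack([A @ x for x in xs])   # columns A x_i
    # Enforce sum(e) = 1 by eliminating the last weight:
    # residual = (b - A x_last) - sum_{i<last} e_i (A x_i - A x_last).
    d = V[:, -1]
    W = V[:, :-1] - d[:, None]
    e_head, *_ = np.linalg.lstsq(W, b - d, rcond=None)
    return np.append(e_head, 1.0 - e_head.sum())

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([1.0, 2.0])
xs = [np.array([0.3, 0.5]), np.array([0.2, 0.6])]  # two local iterates
e = adaptive_weights(A, b, xs)
print(abs(e.sum() - 1.0) < 1e-12)                  # weights sum to 1
```

By construction, the combined iterate’s residual is never larger than the residual of any single local iterate, which is the sense in which the combination is ‘the best’ point in the hyperplane.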

In the rest of this paper, we first give some notation and preliminaries in Section 1, and then a parallel multisplitting iterative method with self-adaptive weighting matrices is put forward in Section 2. The convergence of the parallel multisplitting iterative method is established in Section 3. Moreover, we give computational results for the method on a test problem in Section 4. We end the paper with a conclusion in Section 5.

Here are some essential notations and preliminaries. \(\mathbf{R}^{n\times n}\) is used to denote the set of \(n\times n\) real matrices, and \(\mathbf{R}^{n}\) is the set of *n*-dimensional real vectors. \(A^{T}\) represents the transpose of the matrix *A*, and \(x^{T}\) denotes the transpose of the vector *x*. \(\langle A\rangle\) stands for the comparison (Ostrowski) matrix of the matrix *A*, defined by \(\langle A\rangle_{ii}=|a_{ii}|\) and \(\langle A\rangle_{ij}=-|a_{ij}|\) for \(i\neq j\), and \(|A|=(|a_{ij}|)\) is the absolute matrix of the matrix *A*.
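These two standard constructions are easy to form directly; below is a minimal sketch (the helper name `comparison_matrix` and the example matrix are our own choices).

```python
import numpy as np

def comparison_matrix(A):
    """Ostrowski comparison matrix <A>: |a_ii| on the diagonal,
    -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

A = np.array([[ 4.0, -1.0,  2.0],
              [ 1.0,  5.0, -1.0],
              [-2.0,  0.0,  6.0]])
print(comparison_matrix(A))   # <A>
print(np.abs(A))              # absolute matrix |A|
```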

In what follows, a matrix *A* that is strictly diagonally dominant by columns is simply called a strictly diagonally dominant matrix.

### Definition 1.1

[17]

The matrix *A* is an *H*-matrix if there exists a positive diagonal matrix *D* such that the matrix *DA* is a strictly diagonally dominant matrix.

### Property 1.2

[14]

The matrix *A* is an *H*-matrix if and only if \(\langle A\rangle\) is an *M*-matrix.
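Property 1.2 yields a practical test: for a Z-matrix with positive diagonal, being a nonsingular *M*-matrix is equivalent to the Jacobi iteration matrix having spectral radius below 1. The sketch below uses this standard characterization; the helper names and the example matrix are our own choices.

```python
import numpy as np

def comparison_matrix(A):
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_matrix(A):
    """A is an H-matrix iff <A> is a nonsingular M-matrix (Property 1.2);
    for <A> with positive diagonal, this holds iff the spectral radius of
    its Jacobi iteration matrix I - D^{-1}<A> is < 1."""
    C = comparison_matrix(A)
    d = np.diag(C)
    if np.any(d <= 0):
        return False                        # a zero diagonal entry: not an H-matrix
    J = np.eye(len(d)) - C / d[:, None]     # Jacobi matrix I - D^{-1}<A>
    return np.max(np.abs(np.linalg.eigvals(J))) < 1.0

A = np.array([[ 4.0, -1.0,  2.0],
              [ 1.0,  5.0, -1.0],
              [-2.0,  0.0,  6.0]])
print(is_h_matrix(A))   # True: strictly diagonally dominant, hence an H-matrix
```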

### Definition 1.3

[8, 12]

Suppose that *A* is an *H*-matrix. A splitting \(A=M-N\) is called an *H*-compatible splitting if \(\langle A\rangle =\langle M\rangle-|N|\).
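For instance, the Jacobi splitting \(M=\operatorname{diag}(A)\), \(N=M-A\) is *H*-compatible, since \(\langle M\rangle\) keeps the diagonal moduli and \(|N|\) collects the off-diagonal moduli. A quick numerical check (the example matrix is our own choice):

```python
import numpy as np

def comparison_matrix(A):
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

A = np.array([[ 4.0, -1.0,  2.0],
              [ 1.0,  5.0, -1.0],
              [-2.0,  0.0,  6.0]])
M = np.diag(np.diag(A))   # Jacobi splitting A = M - N
N = M - A
# H-compatibility: <A> = <M> - |N| entrywise.
print(np.array_equal(comparison_matrix(A), comparison_matrix(M) - np.abs(N)))  # True
```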

### Property 1.4

[15]

Let *A* be an *H*-matrix. Then \(|A^{-1}|\le\langle A\rangle^{-1}\).
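Property 1.4 is entrywise, and can be checked numerically on a concrete instance. A minimal sketch, using one strictly diagonally dominant (hence *H*-) matrix of our own choosing:

```python
import numpy as np

def comparison_matrix(A):
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

A = np.array([[ 4.0, -1.0,  2.0],
              [ 1.0,  5.0, -1.0],
              [-2.0,  0.0,  6.0]])
lhs = np.abs(np.linalg.inv(A))            # |A^{-1}|
rhs = np.linalg.inv(comparison_matrix(A)) # <A>^{-1}, nonnegative for an M-matrix
print(np.all(lhs <= rhs + 1e-12))         # True: Property 1.4 holds entrywise
```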