On block diagonal-Schur complements of the block strictly doubly diagonally dominant matrices

Abstract

As is well known, the diagonal-Schur complements of strictly diagonally dominant matrices are strictly diagonally dominant. In this paper, we verify that the block diagonal-Schur complements of I-block strictly doubly diagonally dominant matrices are again I-block strictly doubly diagonally dominant, and that the same is true for II-block strictly doubly diagonally dominant matrices. The theoretical analysis is supported by numerical experiments.

1 Introduction

The Schur complements and the diagonal-Schur complements have proved to be useful tools in the study of matrix theory (see e.g., [1–3]), linear control theory (see e.g., [4]), numerical analysis (see e.g., [5–9]), and statistics (see e.g., [10, 11]). At the same time, given a family of matrices, it is always interesting to see whether important properties or structures of the family are inherited by the submatrices or by the matrices associated with the original matrices. Such heritable properties have been used, for example, in convergence analyses of iterative methods in numerical analysis (see e.g., [12]).

A great deal of classic work on the relationships of the Schur complements and the diagonal-Schur complements with the original matrices has been contributed; for a survey of these works we refer to [12]. As is shown in [1, 2], the Schur complements of positive semidefinite matrices are positive semidefinite and the Schur complements of strictly diagonally dominant matrices are strictly diagonally dominant; the same is true for M-matrices, H-matrices, inverse M-matrices, strictly doubly diagonally dominant matrices, and generalized strictly diagonally dominant matrices. In addition, Liu and Huang [3] showed that the diagonal-Schur complements of strictly diagonally dominant matrices are strictly diagonally dominant, and that the same is true for strictly γ-diagonally dominant matrices and strictly product γ-diagonally dominant matrices.

As regards block matrices, the concept of a diagonally dominant matrix has been extended and two kinds of block diagonally dominant matrices have been proposed, namely I-block diagonally dominant matrices [13] and II-block diagonally dominant matrices [13]. Later, on the basis of these works, two kinds of generalized block strictly diagonally dominant matrices were also introduced, namely I-block H-matrices [14] and II-block H-matrices [15]. It should be noted that a block diagonally dominant matrix is not always a diagonally dominant matrix; an example is given in (2.6) of [16]. Subsequently, Zhang et al. [17] showed that the Schur complement of an I-(generalized) block diagonally dominant matrix is I-(generalized) block diagonally dominant, and the same is true for II-(generalized) block diagonally dominant matrices.

Let \(C^{n\times n}\) be the set of all \(n\times n \) complex matrices. Suppose \(A\in C^{n\times n}\), \(N=\{1,2,\ldots, n\}\), and let \(|\alpha|\) denote the cardinality of α. For nonempty index sets \(\alpha, \beta\subseteq N\), we denote by \(A(\alpha,\beta)\) the submatrix of A lying in the rows indicated by α and the columns indicated by β. The submatrix \(A(\alpha,\alpha)\) is abbreviated to \(A(\alpha)\). Let \(x^{T}\) be the transpose of the vector x and \(I_{n}\) be the \(n\times n\) identity matrix.

Let \(A=(a_{ij})\in C^{n\times n}\). An \(n\times n\) matrix A is strictly diagonally dominant (abbreviated \(\operatorname{SD}_{n}\)), if

$$|a_{ii}|>P_{i}(A),\quad P_{i}(A)=\sum _{j=1, j\neq i}|a_{ij}|,\quad i=1,2,\ldots,n. $$

An \(n\times n\) matrix A is strictly generalized diagonally dominant (abbreviated \(\operatorname{SGD}_{n}\)), if there exists \(D=\operatorname{diag}(d_{1},\ldots, d_{n})>0\), such that AD is strictly diagonally dominant.

An \(n\times n\) matrix A is strictly doubly diagonally dominant (abbreviated \(\operatorname{SDD}_{n}\)), if

$$|a_{ii}a_{jj}|> P_{i}(A)P_{j}(A),\quad \forall1\leq i< j\leq n. $$

An \(n\times n\) matrix A is generalized strictly doubly diagonally dominant (abbreviated \(\operatorname{SGDD}^{N_{1},N_{2}}_{n}\)), where \(N_{1}\cup N_{2}=N\), \(N_{1}\cap N_{2}=\emptyset \), and ∅ denotes the empty set, if for all \(i\in N_{1}\) and \(j\in N_{2}\)

$$\bigl(|a_{ii}|-\gamma_{i} \bigr) \bigl(|a_{jj}|- \delta_{j} \bigr)> \gamma_{j} \delta_{i}, $$

where

$$\gamma_{s}=\sum_{t\in N_{1}, t\neq s}|a_{st}|, \qquad\delta_{s}=\sum_{t\in N_{2}, t\neq s}|a_{st}|, \quad\mbox{with } s=i \mbox{ or } j. $$

Let \(R^{n\times n}\) be the set of all \(n\times n \) real matrices. For \(A=(a_{ij})\in{R^{m\times{n}}}\) and \(B=(b_{ij})\in{R^{m\times{n}}}\), we write \(A\geq B\), if \(a_{ij}\geq b_{ij}\) for all i, j. A real \(n\times n\) matrix A is called an M-matrix if \(A=sI-B\), where \(s\geq0\), \(B\geq0 \), and \(s>\rho(B)\), \(\rho(B)\) is the spectral radius of B. Let \(M_{n}\) denote the set of \(n\times n\) M-matrices.

Suppose \(A \in C^{n\times n}\), the comparison matrix \(\mu(A)=(\mu_{ij})\), is defined by

$$\mu_{ij}=\left \{ \begin{array}{@{}l@{\quad}l} -|a_{ij}|,& i\neq j, \\ |a_{ij}|, & i=j. \end{array} \right . $$

A complex \(n\times n\) matrix A is called an H-matrix if \(\mu(A) \in M_{n}\). By \(H_{n}\) is denoted the set of \(n\times n\) H-matrices.
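
For concreteness, the scalar notions above can be checked numerically. The following is a minimal NumPy sketch (the paper's experiments use Matlab; the function names here are ours and purely illustrative) that tests strict and strict double diagonal dominance and builds the comparison matrix \(\mu(A)\).

```python
import numpy as np

def off_diag_row_sums(A):
    """P_i(A): sum of the absolute off-diagonal entries of row i."""
    return np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))

def is_strictly_diag_dominant(A):
    """SD_n: |a_ii| > P_i(A) for every row i."""
    return bool(np.all(np.abs(np.diag(A)) > off_diag_row_sums(A)))

def is_strictly_doubly_diag_dominant(A):
    """SDD_n: |a_ii a_jj| > P_i(A) P_j(A) for all i < j."""
    d, P = np.abs(np.diag(A)), off_diag_row_sums(A)
    n = A.shape[0]
    return all(d[i] * d[j] > P[i] * P[j] for i in range(n) for j in range(i + 1, n))

def comparison_matrix(A):
    """mu(A): |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M
```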

In this paper, we propose the concept of the block diagonal-Schur complement on block matrices and study the properties on the diagonal-Schur complement of two kinds of (generalized) block doubly diagonally dominant matrices.

2 Definitions and lemmas

Consider an \(n\times n\) complex matrix A. Let s (\(1\leq s\leq n\)) be an arbitrary natural number and A be partitioned into the following form:

$$ A= \begin{pmatrix} A(\alpha_{1},\alpha_{1}) & A(\alpha_{1},\alpha_{2}) & \cdots& A(\alpha _{1},\alpha_{s})\\ A(\alpha_{2},\alpha_{1}) & A(\alpha_{2},\alpha_{2}) & \cdots& A(\alpha _{2},\alpha_{s}) \\ \vdots& \vdots& \ddots& \vdots\\ A(\alpha_{s},\alpha_{1}) & A(\alpha_{s},\alpha_{2}) & \cdots& A(\alpha_{s},\alpha_{s}) \end{pmatrix}, $$
(1)

where \(\alpha_{0}=\emptyset\) and

$$\alpha_{i}= \Biggl\{ \sum^{i-1}_{t=0}| \alpha_{t}|+1,\ldots,\sum^{i}_{t=0}| \alpha_{t}| \Biggr\} ,\quad i=1,2,\ldots,s,\sum ^{s}_{t=0}|\alpha_{t}|=n, $$

with \(A(\alpha_{t},\alpha_{t})\) being a \(|\alpha_{t}|\times|\alpha_{t}|\) nonsingular principal submatrix of A, \(t=1,2,\ldots,s\).

Let \(C^{n\times n}_{s}\) be the set of all \(s\times s\) block matrices in \(C^{n\times n}\) partitioned as (1). Suppose \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s} \in C^{n\times n}_{s}\) and let \(N(A)=(\|A(\alpha_{l},\alpha_{m})\|)_{s\times s}\) denote the norm matrix of block matrix A.

Let \(\alpha\subset N\), \(\alpha^{c}=N-\alpha\), and \(A(\alpha)\) be nonsingular. The Schur complement of \(A(\alpha)\) in A is defined by

$$ A/A(\alpha)=A/\alpha=A\bigl(\alpha^{c}\bigr)-A\bigl( \alpha^{c},\alpha\bigr)\bigl[A(\alpha )\bigr]^{-1}A\bigl( \alpha,\alpha^{c}\bigr), $$
(2)

and the block diagonal-Schur complement of \(A(\alpha)\) in A is defined by

$$ A/_{\circ} A(\alpha)=A/_{\circ}\alpha=A\bigl( \alpha^{c}\bigr)- \bigl\{ A\bigl(\alpha^{c},\alpha \bigr) \bigl[A(\alpha)\bigr]^{-1}A\bigl(\alpha,\alpha^{c}\bigr) \bigr\} \circ E(\alpha), $$
(3)

where ‘∘’ denotes the Hadamard (entrywise) product and

$$\begin{aligned}& \alpha=\alpha_{i_{1}}\cup\alpha_{i_{2}}\cup\cdots\cup \alpha_{i_{k}},\qquad \alpha^{c}=\alpha_{j_{1}}\cup \alpha_{j_{2}}\cup\cdots\cup\alpha_{j_{l}}, \\& \quad i_{1}< i_{2}<\cdots<i_{k}, \qquad j_{1}<j_{2}<\cdots<j_{l}, \qquad k+l=s, \\& E(\alpha)= \begin{pmatrix} E_{|\alpha_{j_{1}}|} & & & \\ & E_{|\alpha_{j_{2}}|} & & \\ & & \ddots& \\ & & & E_{|\alpha_{j_{l}}|} \end{pmatrix},\qquad E_{|\alpha_{j_{t}}|}= \left.\begin{pmatrix} 1 & 1 & \ldots& 1 \\ 1 & 1 & \ldots& 1 \\ \vdots& \vdots& \ddots& \vdots\\ 1 & 1 & \ldots& 1 \end{pmatrix} \right._{{|\alpha_{j_{t}}|}\times{|\alpha_{j_{t}}|}}, \\& \quad\mbox{with } t=1,2,\ldots,l, \end{aligned}$$

so that the Hadamard product with \(E(\alpha)\) retains only the block-diagonal part (with respect to the partition of \(\alpha^{c}\)) of the correction term.
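
The constructions (2) and (3) differ only in that the block diagonal-Schur complement subtracts just the block-diagonal part of the correction term. The following sketch (an illustration in NumPy under our own naming, not the authors' code) computes both objects for a matrix partitioned into consecutive diagonal blocks.

```python
import numpy as np

def schur_complements(A, sizes, alpha_blocks):
    """Return (A/alpha, A/∘alpha) as in (2)-(3) for A partitioned into
    consecutive diagonal blocks of the given sizes; alpha_blocks lists the
    (0-based) block indices that form alpha."""
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    rows = lambda t: np.arange(offsets[t], offsets[t + 1])
    a = np.concatenate([rows(t) for t in alpha_blocks])
    comp_blocks = [t for t in range(len(sizes)) if t not in alpha_blocks]
    c = np.concatenate([rows(t) for t in comp_blocks])

    # correction term A(alpha^c, alpha) [A(alpha)]^{-1} A(alpha, alpha^c)
    correction = A[np.ix_(c, a)] @ np.linalg.solve(A[np.ix_(a, a)], A[np.ix_(a, c)])
    schur = A[np.ix_(c, c)] - correction

    # block diagonal-Schur complement: subtract only the diagonal blocks of
    # the correction (the Hadamard product with E(alpha) in (3))
    diag_schur = A[np.ix_(c, c)].copy()
    pos = 0
    for t in comp_blocks:
        sl = slice(pos, pos + sizes[t])
        diag_schur[sl, sl] -= correction[sl, sl]
        pos += sizes[t]
    return schur, diag_schur
```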

Definition 2.1

[13]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s} \in C^{n\times n}_{s}\) and \(A(\alpha_{l},\alpha_{l})\) (\(l=1,2,\ldots,s\)) be nonsingular. If

$$ \bigl\| \bigl[A(\alpha_{l},\alpha_{l}) \bigr]^{-1}\bigr\| ^{-1}>\mathop{\sum_{m=1}}_{m\neq l}^{s} \bigl\| A(\alpha_{l},\alpha_{m})\bigr\| ,\quad l=1,2,\ldots,s, $$
(4)

then A is an I-block strictly diagonally dominant matrix (abbreviated \(\mbox{I-BSD}_{s}\)).

Definition 2.2

[13]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s} \in C^{n\times n}_{s}\) and \(A(\alpha_{l},\alpha_{l})\) (\(l=1,2,\ldots,s\)) be nonsingular. If

$$ \mathop{\sum_{m=1}}_{m\neq l}^{s} \bigl\| \bigl[A(\alpha_{l},\alpha_{l})\bigr]^{-1}A( \alpha_{l},\alpha_{m})\bigr\| < 1, \quad l=1,2,\ldots,s, $$
(5)

then A is an II-block strictly diagonally dominant matrix (abbreviated \(\mbox{II-BSD}_{s}\)).

Definition 2.3

[5]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s} \in C^{n\times n}_{s}\) and \(A(\alpha_{l},\alpha_{l})\) (\(l=1,2,\ldots,s\)) be nonsingular. If, for all \(1\leq i< j\leq s\),

$$ \bigl\| \bigl[A(\alpha_{i},\alpha_{i}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A(\alpha_{j}, \alpha_{j})\bigr]^{-1}\bigr\| ^{-1}>\mathop{\sum _{m=1}}_{m\neq i}^{s} \bigl\| A( \alpha_{i},\alpha_{m})\bigr\| \mathop{\sum _{m=1}}_{m\neq j}^{s}\bigl\| A(\alpha_{j}, \alpha_{m})\bigr\| , $$
(6)

then A is an I-block strictly doubly diagonally dominant matrix (abbreviated \(\mbox{I-BSDD}_{s}\)).

Definition 2.4

[5]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s} \in C^{n\times n}_{s}\) and \(A(\alpha_{l},\alpha_{l})\) (\(l=1,2,\ldots,s\)) be nonsingular. If, for all \(1\leq i< j\leq s\),

$$ \mathop{\sum_{m=1}}_{m\neq i}^{s} \bigl\| \bigl[A(\alpha_{i},\alpha_{i})\bigr]^{-1}A( \alpha_{i},\alpha_{m})\bigr\| \mathop{\sum _{m=1}}_{m\neq j}^{s}\bigl\| \bigl[A( \alpha_{j},\alpha_{j})\bigr]^{-1}A( \alpha_{j},\alpha_{m})\bigr\| < 1, $$
(7)

then A is an II-block strictly doubly diagonally dominant matrix (abbreviated \(\mbox{II-BSDD}_{s}\)).
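
Conditions (4)-(7) are straightforward to test numerically once the block norms \(\|A(\alpha_{l},\alpha_{m})\|\) and \(\|[A(\alpha_{l},\alpha_{l})]^{-1}A(\alpha_{l},\alpha_{m})\|\) are available. A hedged NumPy sketch follows (it assumes the spectral norm, as in Section 4; function names are ours).

```python
import numpy as np

def block_row_data(A, sizes):
    """For each block row l return ||[A(alpha_l,alpha_l)]^{-1}||^{-1}, the I-type
    off-diagonal sum from (4), and the II-type off-diagonal sum from (5), in the 2-norm."""
    off = np.concatenate(([0], np.cumsum(sizes)))
    blk = lambda l, m: A[off[l]:off[l + 1], off[m]:off[m + 1]]
    s = len(sizes)
    t, rI, rII = np.zeros(s), np.zeros(s), np.zeros(s)
    for l in range(s):
        inv = np.linalg.inv(blk(l, l))
        t[l] = 1.0 / np.linalg.norm(inv, 2)
        rI[l] = sum(np.linalg.norm(blk(l, m), 2) for m in range(s) if m != l)
        rII[l] = sum(np.linalg.norm(inv @ blk(l, m), 2) for m in range(s) if m != l)
    return t, rI, rII

def is_I_BSD(A, sizes):                      # condition (4)
    t, rI, _ = block_row_data(A, sizes)
    return bool(np.all(t > rI))

def is_II_BSD(A, sizes):                     # condition (5)
    _, _, rII = block_row_data(A, sizes)
    return bool(np.all(rII < 1))

def is_I_BSDD(A, sizes):                     # condition (6)
    t, rI, _ = block_row_data(A, sizes)
    s = len(sizes)
    return all(t[i] * t[j] > rI[i] * rI[j] for i in range(s) for j in range(i + 1, s))

def is_II_BSDD(A, sizes):                    # condition (7)
    _, _, rII = block_row_data(A, sizes)
    s = len(sizes)
    return all(rII[i] * rII[j] < 1 for i in range(s) for j in range(i + 1, s))
```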

Lemma 2.1

[5]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s}\in C^{n\times n}_{s}\) be an \(\mbox{II-BSDD}_{s}\). Then \(D^{-1}A\) is an \(\mbox{I-BSDD}_{s}\), where \(D=\operatorname{diag}(A(\alpha_{1},\alpha_{1}),A(\alpha_{2},\alpha_{2}),\ldots,A(\alpha _{s},\alpha_{s}))\).

Remark 2.1

[5]

If A is an \(\mbox{I-BSD}_{s}\) (or \(\mbox{I-BSDD}_{s}\)), according to the following inequality:

$$ \bigl\| \bigl[A(\alpha_{l},\alpha_{l})\bigr]^{-1}A( \alpha_{l},\alpha_{m})\bigr\| \leq\bigl\| \bigl[A(\alpha_{l}, \alpha_{l})\bigr]^{-1}\bigr\| \bigl\| A(\alpha_{l},\alpha_{m})\bigr\| , $$
(8)

then A is an \(\mbox{II-BSD}_{s}\) (or \(\mbox{II-BSDD}_{s}\)).

Definition 2.5

[14, 15]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s} \in C^{n\times n}_{s}\) and \(A(\alpha_{l},\alpha_{l})\) (\(l=1,2,\ldots,s\)) be nonsingular. If the comparison matrix of the block matrix A, \(\mu_{I}(A)=(\omega_{lm}) \in R^{s\times s}\) (respectively, \(\mu_{II}(A)=(\varpi_{lm})\in R^{s\times s}\)), is an M-matrix, where

$$\begin{aligned}& \omega_{lm}=\left \{ \begin{array}{@{}l@{\quad}l} \|[A(\alpha_{l},\alpha_{l})]^{-1}\|^{-1}, & \mbox{if } l=m,\\ -\|A(\alpha_{l},\alpha_{m})\|, &\mbox{if } l\neq m, \end{array} \right . \\& \varpi_{lm}=\left \{ \begin{array}{@{}l@{\quad}l} 1, &\mbox{if } l=m, \\ -\|[A(\alpha_{l},\alpha_{l})]^{-1}A(\alpha_{l},\alpha_{m})\|,& \mbox{if } l\neq m, \end{array} \right . \end{aligned}$$

then A is called an I-block H-matrix (respectively, an II-block H-matrix).
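
The block comparison matrices \(\mu_{I}(A)\) and \(\mu_{II}(A)\) of Definition 2.5 can likewise be assembled numerically; a simple test for the nonsingular M-matrix property is to write \(\mu=sI-B\) with \(B\geq0\) and check \(\rho(B)<s\). The sketch below assumes the 2-norm and is illustrative only (names are ours).

```python
import numpy as np

def mu_I(A, sizes):
    """Block comparison matrix mu_I(A) of Definition 2.5 (2-norm)."""
    off = np.concatenate(([0], np.cumsum(sizes)))
    blk = lambda l, m: A[off[l]:off[l + 1], off[m]:off[m + 1]]
    s = len(sizes)
    M = np.empty((s, s))
    for l in range(s):
        for m in range(s):
            if l == m:
                M[l, m] = 1.0 / np.linalg.norm(np.linalg.inv(blk(l, l)), 2)
            else:
                M[l, m] = -np.linalg.norm(blk(l, m), 2)
    return M

def mu_II(A, sizes):
    """Block comparison matrix mu_II(A) of Definition 2.5 (2-norm)."""
    off = np.concatenate(([0], np.cumsum(sizes)))
    blk = lambda l, m: A[off[l]:off[l + 1], off[m]:off[m + 1]]
    s = len(sizes)
    M = np.eye(s)
    for l in range(s):
        inv = np.linalg.inv(blk(l, l))
        for m in range(s):
            if m != l:
                M[l, m] = -np.linalg.norm(inv @ blk(l, m), 2)
    return M

def is_nonsingular_M_matrix(M):
    """Write M = sI - B with B >= 0 and check rho(B) < s."""
    s = float(np.max(np.diag(M)))
    B = s * np.eye(M.shape[0]) - M
    return bool(np.all(B >= -1e-12) and np.max(np.abs(np.linalg.eigvals(B))) < s)
```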

Lemma 2.2

[13]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s}\) be an I-block (respectively, II-block) H-matrix. Then A is nonsingular.

Remark 2.2

If A is an \(\mbox{I-(II-)BSD}_{s}\) or an \(\mbox{I-(II-)BSDD}_{s}\), then by (4)-(7), \(\mu_{I}(A)\) (respectively, \(\mu_{II}(A)\)) is an M-matrix. Further, by Definition 2.5 and Lemma 2.2, A is a nonsingular I-(II-)block H-matrix.

Lemma 2.3

[15]

If \(A\in \operatorname{SD}_{n}, \operatorname{SDD}_{n}, \operatorname{SGD}_{n} \textit{ or }\operatorname{SGDD}^{N_{1},N_{2}}_{n}\), then \(\mu(A)\) is an M-matrix, i.e., A is an H-matrix.

Lemma 2.4

[1]

Let \(A\in C^{n\times n}\). If \(\|A\|<1\), then \(I_{n}-A\) is nonsingular and

$$\bigl\| (I_{n}-A)^{-1}\bigr\| \leq\frac{1}{1-\|A\|}, $$

where \(I_{n}\) denotes the \(n\times n\) identity matrix.

Lemma 2.5

[18]

Let \(A=(A(\alpha_{l},\alpha_{m}))_{s\times s}\in\mbox{I-(II-)BSD}_{s}\). Then, for all \(t=1,2,\ldots,l\),

$$ \psi_{t}=1-\left \Vert \bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\bigl(A(\alpha _{j_{t}}, \alpha_{i_{1}}),\ldots,A(\alpha_{j_{t}},\alpha_{i_{k}})\bigr) \bigl[A(\alpha )\bigr]^{-1} \begin{pmatrix} A(\alpha_{i_{1}},\alpha_{j_{t}}) \\ \vdots\\ A(\alpha_{i_{k}},\alpha_{j_{t}}) \end{pmatrix}\right \Vert >0. $$

3 On block diagonal-Schur complement of \(\mbox{I-(II-)BSDD}_{s}\)

In this section, to verify the heritable properties of the block diagonal-Schur complements from the original matrices \(\mbox{I-BSDD}_{s}\) and \(\mbox{II-BSDD}_{s}\), we only need to consider the following two cases:

(1) If A is an \(\mbox{I-BSDD}_{s}\) but not an \(\mbox{I-BSD}_{s}\), then by Definition 2.3 there exists one and only one index \(i_{0}\), with \(A(\alpha_{i_{0}},\alpha_{i_{0}})\) nonsingular, such that

$$ \bigl\| \bigl[A(\alpha_{i_{0}},\alpha_{i_{0}}) \bigr]^{-1}\bigr\| ^{-1}\leq \mathop{\sum_{m=1}}_{m\neq i_{0}}^{s}\bigl\| A(\alpha_{i_{0}},\alpha_{m})\bigr\| . $$
(9)

(2) If A is an \(\mbox{II-BSDD}_{s}\) but not an \(\mbox{II-BSD}_{s}\), then by Definition 2.4 there exists one and only one index \(i_{0}\), with \(A(\alpha_{i_{0}},\alpha_{i_{0}})\) nonsingular, such that

$$ \mathop{\sum_{r=1}}_{r\neq i_{0}}^{s}\bigl\| \bigl[A( \alpha_{i_{0}},\alpha_{i_{0}})\bigr]^{-1}A( \alpha_{i_{0}},\alpha _{r})\bigr\| \geq 1. $$
(10)

Theorem 3.1

Let A be an \(n\times n\) \(\mbox{I-BSDD}_{s}\) but not an \(\mbox{I-BSD}_{s}\), and let \(i_{0}\) (\(1\leq i_{0}\leq s\)) satisfy the condition in (9). For any index set \(\alpha\subseteq N\), writing \(\alpha=\alpha_{i_{1}}\cup\alpha_{i_{2}}\cup\cdots\cup\alpha_{i_{k}}\) and \(\alpha^{c}=\alpha_{j_{1}}\cup\alpha_{j_{2}}\cup\cdots\cup\alpha_{j_{l}}\), with \(k+l=s\), the following hold:

(i) If \(\alpha_{i_{0}}\subseteq\alpha\), then \(A/_{\circ}\alpha\in \mbox{I-BSD}_{l}\).

(ii) If \(\alpha_{i_{0}}\subseteq\alpha^{c}\), then \(A/_{\circ}\alpha\in \mbox{I-BSDD}_{l}\).

Proof

Without loss of generality, we can assume \(A/_{\circ}\alpha=(\tilde{A}(\alpha_{t},\alpha_{r}))\) and denote

$$\begin{aligned}[b] &\Phi_{\omega}=\bigl(A(\alpha_{j_{\omega}}, \alpha_{i_{1}}),\ldots ,A(\alpha_{j_{\omega}},\alpha_{i_{k}}) \bigr)\bigl[A(\alpha)\bigr]^{-1} \begin{pmatrix} A(\alpha_{i_{1}},\alpha_{j_{\omega}}) \\ \vdots\\ A(\alpha_{i_{k}},\alpha_{j_{\omega}}) \end{pmatrix}, \\ &K_{\omega}=\bigl(\bigl\| A(\alpha_{j_{\omega}},\alpha_{i_{1}})\bigr\| , \ldots,\bigl\| A(\alpha _{j_{\omega}},\alpha_{i_{k}})\bigr\| \bigr),\qquad H_{\upsilon}= \bigl(\bigl\| A(\alpha_{i_{1}},\alpha _{j_{\upsilon}})\bigr\| , \ldots,\bigl\| A( \alpha_{i_{k}},\alpha_{j_{\upsilon}})\bigr\| \bigr)^{T}, \\ &\Psi_{\omega}=K_{\omega}\cdot\mu_{I}\bigl\{ \bigl[A( \alpha)\bigr]^{-1}\bigr\} \cdot H_{\omega}, \qquad\Upsilon_{\omega,\upsilon}=K_{\omega}\cdot\bigl\{ \mu_{I}\bigl[A(\alpha)\bigr]\bigr\} ^{-1}\cdot H_{\upsilon}, \quad \mbox{with } \omega,\upsilon=t,u. \end{aligned} $$

By the definition (3) of the block diagonal-Schur complement, denote \(|\alpha_{j_{t}}|=J_{t}\). According to Remark 2.2 and Lemma 2.5, we obtain \(\|[A(\alpha_{j_{\omega}},\alpha_{j_{\omega}})]^{-1}\Phi_{\omega}\|<1\), \(\omega=t,u\). Further, consider the following two cases:

(i) If \(\alpha_{i_{0}}\subseteq\alpha\), then

$$\begin{aligned} & \bigl\| \bigl[\tilde{A}(\alpha_{t},\alpha_{t}) \bigr]^{-1}\bigr\| ^{-1} -\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \tilde{A}(\alpha_{t},\alpha_{r})\bigr\| \\ &\quad=\bigl\Vert \bigl\{ A(\alpha_{j_{t}},\alpha_{j_{t}})- \Phi_{t} \bigr\} ^{-1}\bigr\Vert ^{-1}-\mathop{ \sum_{r=1}}_{r\neq t}^{l}\bigl\| A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad=\bigl\Vert \bigl\{ I_{J_{t}}-\bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\Phi_{t} \bigr\} ^{-1}\bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\Vert ^{-1} -\mathop{\sum _{r=1}}_{r\neq t}^{l}\bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \\ &\quad\geq\bigl\Vert \bigl\{ I_{J_{t}}-\bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\Phi _{t} \bigr\} ^{-1}\bigr\Vert ^{-1}\bigl\| \bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum _{r=1}}_{r\neq t}^{l}\bigl\| A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\{ 1-\bigl\Vert \bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\Phi_{t}\bigr\Vert \bigr\} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1} \bigr\| ^{-1} -\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad=\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}\bigr\| ^{-1}\bigl\Vert \bigl[A( \alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1}\Phi _{t}\bigr\Vert -\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\bigl\| \bigl[A(\alpha _{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A( \alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1}\bigr\| \Vert \Phi_{t}\Vert -\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| -\Psi_{\omega}\\ &\quad\geq \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| -\Upsilon_{\omega,\omega} \\ &\quad\stackrel{\bigtriangleup}{=}\frac{\operatorname{det}B_{1}}{\operatorname{det}[\mu _{I}(A(\alpha)) ]}, \end{aligned}$$

where

$$B_{1}= \begin{pmatrix} \|[A(\alpha_{j_{t}},\alpha_{j_{t}})]^{-1}\|^{-1}-\mathop{\sum^{l}_{r=1}}\limits_{\hphantom{aai}r\neq t}\| A(\alpha_{j_{t}},\alpha_{j_{r}})\| & -K_{t} \\ -H_{t} & \mu_{I}[A(\alpha)] \end{pmatrix}. $$

Since A is an \(\mbox{I-BSDD}_{s}\), \(\alpha_{i_{0}}\subseteq\alpha\), and \(\forall\alpha_{j_{t}}\subseteq\alpha^{c}\),

$$\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1} \bigr\| ^{-1}-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| >\sum ^{k}_{r=1}\bigl\| A(\alpha_{j_{t}}, \alpha_{i_{r}})\bigr\| . $$

For all \(\alpha_{i_{x}}\subseteq \alpha\), \(x=1,2,\ldots,k\), if \(i_{x}\neq i_{0}\), then

$$ \bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}}) \bigr]^{-1}\bigr\| ^{-1}>\mathop{\sum_{r=1}}_{r\neq i_{x}}^{s} \bigl\| A(\alpha_{i_{x}},\alpha_{r})\bigr\| \geq\mathop{\sum _{r=1}}_{r\neq x}^{k}\bigl\| A(\alpha_{i_{x}}, \alpha_{i_{r}})\bigr\| +\bigl\| A(\alpha_{i_{x}},\alpha_{j_{t}})\bigr\| , $$

if \(i_{x}=i_{0}\), by Definition 2.3 and the inequality (9), we have

$$\begin{aligned} & \Biggl\{ \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \Biggr\} \bigl\| \bigl[A( \alpha_{i_{0}},\alpha _{i_{0}})\bigr]^{-1} \bigr\| ^{-1} \\ &\quad=\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A(\alpha_{i_{0}},\alpha _{i_{0}})\bigr]^{-1}\bigr\| ^{-1}-\bigl\| \bigl[A( \alpha_{i_{0}},\alpha_{i_{0}})\bigr]^{-1}\bigr\| ^{-1} \mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad>\mathop{\sum_{r=1}}_{r\neq j_{t}}^{s} \bigl\| A(\alpha_{j_{t}},\alpha_{r})\bigr\| \mathop{\sum _{r=1}}_{r\neq i_{0}}^{s}\bigl\| A(\alpha_{i_{0}}, \alpha_{r})\bigr\| -\mathop{\sum_{r=1}}_{r\neq i_{0}}^{s} \bigl\| A(\alpha_{i_{0}},\alpha_{r})\bigr\| \mathop{\sum _{r=1}}_{r\neq t}^{l}\bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \\ &\quad= \Biggl\{ \mathop{\sum_{r=1}}_{r\neq j_{t}}^{s} \bigl\| A(\alpha_{j_{t}},\alpha_{r})\bigr\| -\mathop{\sum _{r=1}}_{r\neq t}^{l}\bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \Biggr\} \mathop{\sum_{r=1}}_{r\neq i_{0}}^{s} \bigl\| A(\alpha_{i_{0}},\alpha_{r})\bigr\| \\ &\quad\geq\sum^{k}_{r=1}\bigl\| A( \alpha_{j_{t}},\alpha_{i_{r}})\bigr\| \Biggl\{ \mathop{\sum _{r=1}}_{i_{r}\neq i_{0}}^{k}\bigl\| A(\alpha_{i_{0}}, \alpha_{i_{r}})\bigr\| +\bigl\| A(\alpha_{i_{0}},\alpha_{j_{t}})\bigr\| \Biggr\} . \end{aligned}$$

Thus, \(B_{1}\in \operatorname{SDD}_{k+1}\). Further, by Lemma 2.3, we have \(B_{1}=\mu(B_{1})\in M_{k+1}\) and \(\mu_{I}[A(\alpha)]\in M_{k}\). Therefore, \(\operatorname{det}(B_{1})>0\) and \(\operatorname{det}[\mu_{I}(A(\alpha))]>0\), i.e.,

$$A/_{\circ}\alpha\in \mbox{I-BSD}_{l}. $$

(ii) If \(\alpha_{i_{0}}\subseteq\alpha^{c}\), then for all \(t, u=1,2,\ldots,l\) with \(t\neq u\), we have

$$\begin{aligned} & \bigl\| \bigl[\tilde{A}(\alpha_{t},\alpha_{t}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[\tilde{A}(\alpha_{u}, \alpha_{u})\bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| \tilde{A}( \alpha_{t},\alpha_{r})\bigr\| \mathop{\sum _{r=1}}_{r\neq u}^{l} \bigl\| \tilde{A}( \alpha_{u},\alpha_{r})\bigr\| \\ &\quad=\bigl\Vert \bigl\{ A(\alpha_{j_{t}},\alpha_{j_{t}})- \Phi_{t} \bigr\} ^{-1}\bigr\Vert ^{-1}\bigl\Vert \bigl\{ A(\alpha_{j_{u}},\alpha_{j_{u}})-\Phi_{u} \bigr\} ^{-1}\bigr\Vert ^{-1}\\ &\qquad{}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha _{j_{t}},\alpha_{j_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha _{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad=\bigl\Vert \bigl\{ I_{J_{t}}-\bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\Phi_{t} \bigr\} ^{-1}\bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\Vert ^{-1} \\ &\qquad{} \times\bigl\Vert \bigl\{ I_{J_{u}}-\bigl[A( \alpha_{j_{u}},\alpha _{j_{u}})\bigr]^{-1} \Phi_{u} \bigr\} ^{-1}\bigl[A(\alpha_{j_{u}},\alpha _{j_{u}})\bigr]^{-1}\bigr\Vert ^{-1}\\ &\qquad{}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha _{j_{t}},\alpha_{j_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha _{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq\bigl\Vert \bigl\{ I_{J_{t}}-\bigl[A(\alpha_{j_{t}}, \alpha_{j_{t}})\bigr]^{-1}\Phi _{t} \bigr\} ^{-1}\bigr\Vert ^{-1}\bigl\Vert \bigl\{ I_{J_{u}}- \bigl[A(\alpha_{j_{u}},\alpha _{j_{u}})\bigr]^{-1} \Phi_{u} \bigr\} ^{-1}\bigr\Vert ^{-1} \\ &\qquad{} \times\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A(\alpha_{j_{u}}, \alpha_{j_{u}})\bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} \bigl\{ 1-\bigl\| \bigl[A( \alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1} \Phi_{u}\bigr\| \bigr\} \\ &\qquad{} \times\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A(\alpha_{j_{u}}, \alpha_{j_{u}})\bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| \|\Phi_{t}\| \bigr\} \bigl\{ 1-\bigl\| \bigl[A( \alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1}\bigr\| \| \Phi_{u}\| \bigr\} \\ &\qquad{} \times\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A(\alpha_{j_{u}}, \alpha_{j_{u}})\bigr]^{-1}\bigr\| ^{-1}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad= \bigl\{ \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\|\Phi_{t}\| \bigr\} \bigl\{ \bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1} \bigr\| ^{-1}-\|\Phi_{u}\| \bigr\} \\ &\qquad{}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\{ \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\Psi_{t} \bigr\} \bigl\{ \bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1} \bigr\| ^{-1}-\Psi_{u} \bigr\} \\ &\qquad{}- \mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| 
A(\alpha_{j_{t}}, \alpha_{j_{r}})\bigr\| \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\{ \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\Upsilon _{tt} \bigr\} \bigl\{ \bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1}\bigr\| ^{-1}-\Upsilon_{uu} \bigr\} \\ &\qquad{}-\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha _{j_{t}},\alpha_{j_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha _{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq \bigl\{ \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\bigr\| ^{-1}-\Upsilon _{tt} \bigr\} \bigl\{ \bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1}\bigr\| ^{-1}-\Upsilon_{uu} \bigr\} \\ &\qquad{} - \Biggl\{ \mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| A(\alpha _{j_{t}},\alpha_{j_{r}})\bigr\| +\Upsilon_{tu} \Biggr\} \Biggl\{ \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| +\Upsilon_{ut} \Biggr\} \\ &\quad\stackrel{\bigtriangleup}{=}\frac{\operatorname{det}B_{2}}{\operatorname{det}[\mu _{I}(A)(\alpha)]}, \end{aligned}$$

where

$$B_{2}= \begin{pmatrix} \|[A(\alpha_{j_{t}},\alpha_{j_{t}})]^{-1}\|^{-1}&-\mathop{\sum^{l}_{r=1}}\limits_{\hphantom{aai}r\neq t} \| A(\alpha_{j_{t}},\alpha_{j_{r}})\| & -K_{t} \\ -\mathop{\sum^{l}_{r=1}}\limits_{\hphantom{aai}r\neq u} \|A(\alpha_{j_{u}},\alpha_{j_{r}})\|&\|[A(\alpha_{j_{u}},\alpha _{j_{u}})]^{-1}\|^{-1}&-K_{u}\\ -H_{t} & -H_{u} & \mu_{I}(A)(\alpha) \end{pmatrix}. $$

Since A is an \(\mbox{I-BSDD}_{s}\), \(\alpha_{i_{0}}\subseteq\alpha^{c}\) and \(\alpha_{j_{\omega}}\subseteq\alpha^{c}\), \(\omega=t,u\),

$$\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1} \bigr\| ^{-1}\bigl\| \bigl[A(\alpha_{j_{u}},\alpha _{j_{u}}) \bigr]^{-1}\bigr\| ^{-1}>\mathop{\sum_{r=1}}_{r\neq j_{t}}^{s} \bigl\| A(\alpha _{j_{t}},\alpha_{r})\bigr\| \mathop{\sum _{r=1}}_{r\neq j_{u}}^{s} \bigl\| A(\alpha _{j_{u}},\alpha_{r})\bigr\| . $$

For all \(\alpha_{i_{x}}\subseteq\alpha\), \(x=1,2,\ldots,k\), and all \(\alpha_{j_{\omega}}\subseteq\alpha^{c}\), \(\omega=t,u\),

$$\begin{aligned} & \bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}}) \bigr]^{-1}\bigr\| ^{-1}\bigl\| \bigl[A(\alpha _{j_{\omega}}, \alpha_{j_{\omega}})\bigr]^{-1}\bigr\| ^{-1} \\ &\quad> \Biggl\{ \mathop{\sum_{r=1}}_{r\neq x}^{k} \bigl\| A(\alpha_{i_{x}},\alpha_{i_{r}})\bigr\| +\bigl\| A(\alpha_{i_{x}}, \alpha_{j_{t}})\bigr\| + \bigl\| A(\alpha_{i_{x}},\alpha_{j_{u}})\bigr\| \Biggr\} \mathop{\sum_{r=1}}_{r\neq j_{\omega}}^{s} \bigl\| A(\alpha_{j_{\omega}},\alpha_{r})\bigr\| \end{aligned}$$

and

$$\begin{aligned} \bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}}) \bigr]^{-1}\bigr\| ^{-1}> \Biggl\{ \mathop{\sum _{r=1}}_{r\neq x}^{k} \bigl\| A(\alpha_{i_{x}}, \alpha_{i_{r}})\bigr\| +\bigl\| A(\alpha_{i_{x}},\alpha_{j_{u}})\bigr\| +\bigl\| A( \alpha_{i_{x}},\alpha_{j_{t}})\bigr\| \Biggr\} . \end{aligned}$$

Thus, \(B_{2}\in \operatorname{SDD}_{k+2}\). Further, by Lemma 2.3, we have \(B_{2}=\mu(B_{2})\in M_{k+2}\) and \(\mu_{I}[A(\alpha)]\in M_{k}\). Therefore, \(\operatorname{det}(B_{2})>0\) and \(\operatorname{det}[\mu_{I}(A)(\alpha)]>0\), i.e.,

$$A/_{\circ}\alpha\in \mbox{I-BSDD}_{l}. $$

Combining the proof of (i) and (ii), we complete the proof of Theorem 3.1. □

Theorem 3.2

Let A be an \(n\times n\) \(\mbox{II-BSDD}_{s}\) but not an \(\mbox{II-BSD}_{s}\), and let \(i_{0}\) (\(1\leq i_{0}\leq s\)) satisfy the condition in (10). For any index set \(\alpha\subseteq N\), writing \(\alpha=\alpha_{i_{1}}\cup\alpha_{i_{2}}\cup\cdots\cup\alpha_{i_{k}}\) and \(\alpha^{c}=\alpha_{j_{1}}\cup\alpha_{j_{2}}\cup\cdots\cup\alpha_{j_{l}}\), with \(k+l=s\), the following hold:

(i) If \(\alpha_{i_{0}}\subseteq\alpha\), then \(A/_{\circ}\alpha\in \mbox{II-BSD}_{l}\).

(ii) If \(\alpha_{i_{0}}\subseteq\alpha^{c}\), then \(A/_{\circ}\alpha\in \mbox{II-BSDD}_{l}\).

Proof

Without loss of generality, we can assume \(A/_{\circ}\alpha=(\tilde{A}(\alpha_{t},\alpha_{r}))\), and we denote

$$\begin{aligned}& \Phi_{\omega}=\bigl(A(\alpha_{j_{\omega}}, \alpha_{i_{1}}),\ldots ,A(\alpha_{j_{\omega}},\alpha_{i_{k}}) \bigr)\bigl[A(\alpha)\bigr]^{-1} \begin{pmatrix} A(\alpha_{i_{1}},\alpha_{j_{\omega}}) \\ \vdots\\ A(\alpha_{i_{k}},\alpha_{j_{\omega}}) \end{pmatrix}, \\& K_{\omega}=\bigl(\bigl\| \bigl[A(\alpha_{j_{\omega}},\alpha_{j_{\omega}})\bigr]^{-1}A(\alpha_{j_{\omega}},\alpha_{i_{1}})\bigr\| ,\ldots,\bigl\| \bigl[A(\alpha_{j_{\omega}},\alpha _{j_{\omega}})\bigr]^{-1}A( \alpha_{j_{\omega}},\alpha_{i_{k}})\bigr\| \bigr), \\& H_{\omega}=\bigl(\bigl\| \bigl[A(\alpha_{i_{1}},\alpha_{i_{1}}) \bigr]^{-1}A(\alpha_{i_{1}},\alpha _{j_{\omega}})\bigr\| , \ldots,\bigl\| \bigl[A(\alpha_{i_{k}},\alpha_{i_{k}})\bigr]^{-1}A( \alpha_{i_{k}},\alpha _{j_{\omega}})\bigr\| \bigr)^{T}, \\& D_{1}=\operatorname{diag}\bigl(A(\alpha_{i_{1}}, \alpha_{i_{1}}),\ldots, A(\alpha_{i_{k}},\alpha_{i_{k}}) \bigr), \\& \Psi_{\omega}=K_{\omega}\cdot\mu_{II}\bigl\{ \bigl[A( \alpha)\bigr]^{-1}D_{1}\bigr\} \cdot H_{\omega},\qquad \Upsilon_{\omega,\upsilon}=K_{\omega}\cdot\bigl\{ \mu_{II}\bigl[A( \alpha )\bigr]D^{-1}_{1}\bigr\} ^{-1}\cdot H_{\upsilon}, \\& \quad \mbox{with } \omega,\upsilon =t,u. \end{aligned}$$

(i) If \(\alpha_{i_{0}}\subseteq\alpha\), then according to the proof of Theorem 3.1(i), for all \(t=1,2,\ldots,l\), we obtain

$$\begin{aligned} & 1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[\tilde{A}(\alpha _{t},\alpha_{t}) \bigr]^{-1}\tilde{A}(\alpha_{t},\alpha_{r})\bigr\| \\ &\quad=1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})-\Phi _{t} \bigr]^{-1}A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad=1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl\{ I_{J_{t}}-\bigl[A(\alpha _{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t} \bigr\} ^{-1}\bigl[A( \alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl\{ I_{J_{t}}-\bigl[A(\alpha _{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t} \bigr\} ^{-1}\bigr\| \bigl\| \bigl[A( \alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1}\bigl\| \bigl[A( \alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad= \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1} \\ &\qquad{} \times \Biggl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}\Phi_{t}\bigr\| -\mathop{\sum _{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha _{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \Biggr\} \\ &\quad\geq \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1} \Biggl\{ 1- \Upsilon_{tt}-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha _{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \Biggr\} \\ &\quad\stackrel{\bigtriangleup}{=}\frac{\operatorname{det}B_{3}}{ (1-\|[A(\alpha _{j_{t}},\alpha_{j_{t}})]^{-1}\Phi_{t}\| )\operatorname{det}[\mu_{II}(A)(\alpha)]}, \end{aligned}$$

where

$$B_{3}= \begin{pmatrix} 1-\mathop{\sum^{l}_{r=1}}\limits_{\hphantom{aai}r\neq t} \|[A(\alpha_{j_{t}},\alpha _{j_{t}})]^{-1}A(\alpha_{j_{t}},\alpha_{j_{r}})\| & -K_{t} \\ -H_{t} & \mu_{II}(A)(\alpha) \end{pmatrix}. $$

Since A is an \(\mbox{II-BSDD}_{s}\), \(\alpha_{i_{0}}\subseteq\alpha\), and \(\alpha_{j_{t}}\subseteq\alpha^{c}\),

$$1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| >\sum^{k}_{r=1} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{i_{r}})\bigr\| . $$

For all \(\alpha_{i_{x}}\subseteq\alpha\), \(x=1,2,\ldots,k\), if \(i_{x}\neq i_{0}\), then

$$\begin{aligned} & \bigl\| \bigl[A(\alpha_{i_{x}},\alpha _{i_{x}}) \bigr]^{-1}A(\alpha_{i_{x}},\alpha_{j_{t}})\bigr\| +\mathop{ \sum_{r=1}}_{r\neq x}^{k} \bigl\| \bigl[A( \alpha_{i_{x}},\alpha_{i_{x}})\bigr]^{-1}A( \alpha_{i_{x}},\alpha_{i_{r}})\bigr\| \\ &\quad\leq\mathop{\sum_{r=1}}_{r\neq i_{x}}^{s} \bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}})\bigr]^{-1}A( \alpha_{i_{x}},\alpha_{r})\bigr\| < 1, \end{aligned}$$

if \(i_{x}=i_{0}\), by Definition 2.4 and the inequality (10), we have

$$\begin{aligned} & 1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1} A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1} A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq i_{0}}^{s} \bigl\| \bigl[A( \alpha_{i_{0}},\alpha_{i_{0}})\bigr]^{-1} A( \alpha_{i_{0}},\alpha_{r})\bigr\| \\ &\quad>\sum^{k}_{r=1}\bigl\| \bigl[A( \alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha _{i_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq i_{0}}^{s} \bigl\| \bigl[A( \alpha_{i_{0}},\alpha_{i_{0}})\bigr]^{-1} A( \alpha_{i_{0}},\alpha_{r})\bigr\| \\ &\quad\geq \sum^{k}_{r=1}\bigl\| \bigl[A( \alpha_{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha _{i_{r}})\bigr\| \\ &\qquad{} \times \Biggl\{ \bigl\| \bigl[A(\alpha_{i_{0}},\alpha _{i_{0}}) \bigr]^{-1}A(\alpha_{i_{0}},\alpha_{j_{t}})\bigr\| +\mathop{\sum _{r=1}}_{i_{r}\neq i_{0}}^{k} \bigl\| \bigl[A( \alpha_{i_{0}},\alpha_{i_{0}})\bigr]^{-1}A( \alpha_{i_{0}},\alpha_{i_{r}})\bigr\| \Biggr\} . \end{aligned}$$

Thus, \(B_{3}\in \operatorname{SDD}_{k+1}\). Further, by Lemma 2.3, we obtain \(B_{3}=\mu(B_{3})\in M_{k+1}\) and \(\mu_{II}[A(\alpha)]\in M_{k}\). Therefore, \(\operatorname{det}(B_{3})>0\) and \(\operatorname{det}[\mu_{II}(A(\alpha))]>0\), i.e.,

$$A/_{\circ}\alpha\in \mbox{II-BSD}_{l}. $$

(ii) If \(\alpha_{i_{0}}\subseteq\alpha^{c}\), then for all \(t, u=1,2,\ldots,l\) with \(t\neq u\), we obtain

$$\begin{aligned} & 1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[\tilde{A}(\alpha _{t},\alpha_{t}) \bigr]^{-1}\tilde{A}(\alpha_{t},\alpha_{r})\bigr\| \mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| \bigl[\tilde{A}(\alpha_{u},\alpha _{u})\bigr]^{-1} \tilde{A}(\alpha_{u},\alpha_{r})\bigr\| \\ &\quad=1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}})-\Phi _{t} \bigr]^{-1}A(\alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq u}^{l} \bigl\| \bigl[A( \alpha_{j_{u}},\alpha_{j_{u}})-\Phi_{u} \bigr]^{-1}A(\alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad=1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl\{ I_{J_{t}}-\bigl[A(\alpha _{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t} \bigr\} ^{-1}\bigl[A( \alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\qquad{} \times\mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| \bigl\{ I_{J_{u}}-\bigl[A(\alpha_{j_{u}},\alpha_{j_{u}}) \bigr]^{-1}\Phi_{u} \bigr\} ^{-1}\bigl[A( \alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1}A( \alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl\{ I_{J_{t}}-\bigl[A(\alpha _{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t} \bigr\} ^{-1}\bigr\| \bigl\| \bigl[A( \alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\qquad{} \times\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl\{ I_{J_{u}}-\bigl[A(\alpha_{j_{u}},\alpha_{j_{u}}) \bigr]^{-1}\Phi_{u} \bigr\} ^{-1}\bigr\| \bigl\| \bigl[A( \alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1}A( \alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad\geq1-\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1}\bigl\| \bigl[A( \alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \\ &\qquad{} \times\mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}}) \bigr]^{-1}\Phi_{u}\bigr\| \bigr\} ^{-1}\bigl\| \bigl[A(\alpha _{j_{u}},\alpha_{j_{u}})\bigr]^{-1}A( \alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \\ &\quad= \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1} \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1} \Phi_{u}\bigr\| \bigr\} ^{-1} \\ &\qquad{} \times \Biggl\{ \bigl[1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}\Phi_{t}\bigr\| \bigr] \bigl[1-\bigl\| \bigl[A(\alpha_{j_{u}},\alpha _{j_{u}})\bigr]^{-1} \Phi_{u}\bigr\| \bigr] \\ &\qquad{} -\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha _{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| \mathop{\sum _{r=1}}_{r\neq u}^{l} \bigl\| \bigl[A( \alpha_{j_{u}},\alpha _{j_{u}})\bigr]^{-1}A( \alpha_{j_{u}},\alpha_{j_{r}})\bigr\| \Biggr\} \\ &\quad\geq \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1} \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1} \Phi_{u}\bigr\| \bigr\} ^{-1} \Biggl\{ (1-\Upsilon_{tt}) (1-\Upsilon_{uu}) \\ & \qquad{}- \biggl[\mathop{\sum_{r=1}}_{r\neq t}^{l} \bigl\| \bigl[A(\alpha _{j_{t}},\alpha_{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{j_{r}})\bigr\| +\Upsilon _{tu} \biggr] \biggl[\mathop{\sum_{r=1}}_{r\neq u}^{l} \bigl\| \bigl[A(\alpha _{j_{u}},\alpha_{j_{u}})\bigr]^{-1}A( \alpha_{j_{u}},\alpha_{j_{r}})\bigr\| +\Upsilon _{ut} \biggr] \Biggr\} \\ &\quad= \bigl\{ 1-\bigl\| \bigl[A(\alpha_{j_{t}},\alpha_{j_{t}}) \bigr]^{-1}\Phi_{t}\bigr\| \bigr\} ^{-1} \bigl\{ 1-\bigl\| 
\bigl[A(\alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1} \Phi_{u}\bigr\| \bigr\} ^{-1} \\ &\quad\stackrel{\bigtriangleup}{=}\frac{\operatorname{det}B_{4}}{\operatorname{det}[\mu _{II}(A)(\alpha)]}, \end{aligned}$$

where

$$\begin{aligned}& B_{4}= \begin{pmatrix} 1& -\xi_{t}& -K_{t} \\ -\xi_{u}&1&-K_{u}\\ -H_{t} & -H_{u} & \mu_{II}(A)(\alpha) \end{pmatrix}, \qquad \xi_{\omega}=\mathop{\sum _{r=1}}_{r\neq \omega}^{l} \bigl\| \bigl[A( \alpha_{j_{\omega}},\alpha_{j_{\omega}})\bigr]^{-1}A(\alpha _{j_{\omega}},\alpha_{j_{r}})\bigr\| , \\& \quad\mbox{with } \omega=t,u. \end{aligned}$$

Since A is an \(\mbox{II-BSDD}_{s}\), \(\alpha_{i_{0}}\subseteq\alpha^{c}\), and \(\alpha_{j_{\omega}}\subseteq\alpha^{c}\), \(\omega=t,u\),

$$\mathop{\sum_{r=1}}_{r\neq j_{t}}^{s} \bigl\| \bigl[A(\alpha_{j_{t}},\alpha _{j_{t}})\bigr]^{-1}A( \alpha_{j_{t}},\alpha_{r})\bigr\| \mathop{\sum _{r=1}}_{r\neq j_{u}}^{s} \bigl\| \bigl[A( \alpha_{j_{u}},\alpha_{j_{u}})\bigr]^{-1}A(\alpha _{j_{u}},\alpha_{r})\bigr\| < 1. $$

For all \(\alpha_{i_{x}}\subseteq\alpha\), \(x=1,2,\ldots,k\), and all \(\alpha_{j_{\omega}}\subseteq\alpha^{c}\), \(\omega=t,u\),

$$\begin{aligned} & \Biggl\{ \mathop{\sum_{r=1}}_{r\neq x}^{k} \bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}})\bigr]^{-1}A( \alpha_{i_{x}},\alpha_{i_{r}})\bigr\| +\bigl\| \bigl[A(\alpha_{i_{x}}, \alpha_{i_{x}})\bigr]^{-1}A(\alpha_{i_{x}}, \alpha_{j_{t}})\bigr\| \\ &\quad{} +\bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}}) \bigr]^{-1}A(\alpha _{i_{x}},\alpha_{j_{u}})\bigr\| \Biggr\} \mathop{\sum_{r=1}}_{r\neq j_{\omega}}^{s} \bigl\| \bigl[A(\alpha_{j_{\omega}},\alpha_{j_{\omega}})\bigr]^{-1}A(\alpha _{j_{\omega}},\alpha_{r})\bigr\| < 1. \end{aligned}$$

For all \(\alpha_{i_{x}}\subseteq\alpha\), with \(x=1,2,\ldots,k\),

$$\begin{aligned} &\Biggl\{ \mathop{\sum_{r=1}}_{r\neq x}^{k} \bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}})\bigr]^{-1} A( \alpha_{i_{x}},\alpha_{i_{r}})\bigr\| +\bigl\| \bigl[A(\alpha_{i_{x}}, \alpha_{i_{x}})\bigr]^{-1}A(\alpha_{i_{x}}, \alpha_{j_{u}})\bigr\| \\ &\quad{}+\bigl\| \bigl[A(\alpha_{i_{x}},\alpha_{i_{x}}) \bigr]^{-1}A(\alpha_{i_{x}},\alpha_{j_{t}})\bigr\| \Biggr\} < 1. \end{aligned}$$

Thus, \(B_{4}\in \operatorname{SDD}_{k+2}\). Further, by Lemma 2.3, we have \(B_{4}=\mu(B_{4})\in M_{k+2}\), \(\mu_{II}[A(\alpha)]\in M_{k}\). Therefore, \(\operatorname{det}(B_{4})>0\) and \(\operatorname{det}[\mu_{II}(A)(\alpha)]>0\), i.e.,

$$A/_{\circ}\alpha\in \mbox{II-BSDD}_{l}. $$

Combining the proof of (i) and (ii), we complete this proof. □

4 Numerical examples

In this section, we use two numerical examples to verify the accuracy of the theoretical analysis in terms of the iteration number (denoted by IT) and the solution time in seconds (denoted by CPU). We then illustrate the feasibility and effectiveness of the preconditioned \(\operatorname{GMRES}(l)\) (\(\operatorname{PGMRES}(l)\)) [19] iteration method and verify that it is more efficient than the ordinary \(\operatorname{GMRES}(l)\) iteration method. Here, the integer l in \(\operatorname{GMRES}(l)\) denotes the number of restarting steps.

In our numerical experiments, we choose the zero vector \(x^{(0)}=0\) as the initial guess and take the right-hand-side vector b so that the exact solution x is the vector with all entries equal to one. We use

$$\mbox{RES}=\frac{\|b-Ax^{(k)}\|_{2}}{\|b-Ax^{(0)}\|_{2}}< 10^{-8} $$

or the prescribed maximum iteration number \(k_{\max}=n\) as the stopping criterion, where \(x^{(k)}\) is the approximate solution at the kth iteration.

For convenience, without loss of generality, we suppose \(\|\cdot \|=\|\cdot\|_{2}\), \(\alpha=\bigcup^{2}_{r=1}\alpha _{i_{r}}\), and \(\alpha^{c}=\bigcup^{3}_{t=1}\alpha_{j_{t}}\), and we denote the block diagonal-Schur complement \(A/_{\circ} A(\alpha)\) of \(A(\alpha)\) in A by

$$A/_{\circ}A(\alpha)= \begin{pmatrix} \widetilde{A}_{11} & \widetilde{A}_{12} & \widetilde{A}_{13} \\ \widetilde{A}_{21} & \widetilde{A}_{22} & \widetilde{A}_{23} \\ \widetilde{A}_{31} & \widetilde{A}_{32} & \widetilde{A}_{33} \end{pmatrix}. $$

Let us consider the following linear system:

$$Ax= \begin{pmatrix} A(\alpha,\alpha)& A(\alpha,\alpha^{c})\\ A(\alpha^{c},\alpha) & A(\alpha^{c},\alpha^{c}) \end{pmatrix} \begin{pmatrix} x_{1}\\ x_{2} \end{pmatrix}= \begin{pmatrix} b_{1}\\ b_{2} \end{pmatrix}. $$

By solving the above linear system, we compare the GMRES iteration method equipped with the block triangular approximate Schur complement preconditioner (11) established in this paper against the ordinary GMRES iteration method, where

$$ P_{1}= \begin{pmatrix} A(\alpha,\alpha)& \\ A(\alpha^{c},\alpha) & A/_{\circ}A(\alpha) \end{pmatrix}. $$
(11)
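
Applying the preconditioner \(P_{1}\) inside a Krylov solver amounts to one block forward substitution per iteration: solve with \(A(\alpha,\alpha)\), update the second block of the residual, then solve with \(A/_{\circ}A(\alpha)\). A possible SciPy realization is sketched below (the paper's runs were carried out in Matlab; the function name, the use of scipy.sparse.linalg.gmres, and the parameter choices are our assumptions).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def make_P1_solver(A_aa, A_ca, S_diag):
    """LinearOperator that applies P_1^{-1} for the block lower-triangular
    preconditioner P_1 = [[A(alpha,alpha), 0], [A(alpha^c,alpha), A/∘A(alpha)]]
    by one block forward substitution."""
    n1, n2 = A_aa.shape[0], S_diag.shape[0]

    def apply_inverse(r):
        r1, r2 = r[:n1], r[n1:]
        y1 = np.linalg.solve(A_aa, r1)
        y2 = np.linalg.solve(S_diag, r2 - A_ca @ y1)
        return np.concatenate([y1, y2])

    return LinearOperator((n1 + n2, n1 + n2), matvec=apply_inverse, dtype=A_aa.dtype)

# Usage sketch (A, b and the blocks A_aa = A(alpha,alpha), A_ca = A(alpha^c,alpha),
# S_diag = A/∘A(alpha) assembled as in the examples below; parameter names follow
# recent SciPy and are assumptions):
# M = make_P1_solver(A_aa, A_ca, S_diag)
# x, info = gmres(A, b, rtol=1e-8, restart=20, M=M)
```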

In the following, we verify Theorem 3.1 and Theorem 3.2 by Example 4.1 and Example 4.2, respectively.

Example 4.1

Consider a linear equation system \(Ax=b\) whose coefficient matrix A is denoted by

$$A= \begin{pmatrix} A_{11}& A_{12}& A_{13}& A_{14}& A_{15}\\ A_{21}& A_{22}& A_{23}& A_{24}& A_{25}\\ A_{31}& A_{32}& A_{33}& A_{34}& A_{35}\\ A_{41}& A_{42}& A_{43}& A_{44}& A_{45}\\ A_{51}& A_{52}& A_{53}& A_{54}& A_{55} \end{pmatrix}, $$

where the submatrices of the coefficient matrix A take on structures of the forms

$$\begin{aligned}& A_{11}= \left.\begin{pmatrix} 150& -50& & &\\ -50& 150& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 150 \end{pmatrix} \right._{20\times20}, \\& A_{22}= \left.\begin{pmatrix} 1{,}500& -50& & &\\ -50& 1{,}500& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 1{,}500 \end{pmatrix} \right._{20\times20}, \\& A_{33}= \left.\begin{pmatrix} 1{,}500& -50& & &\\ -50& 1{,}500& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 1{,}500 \end{pmatrix} \right._{30\times30}, \\& A_{44}= \left.\begin{pmatrix} 1{,}500& -50& & &\\ -50& 1{,}500& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 1{,}500 \end{pmatrix} \right._{15\times15}, \\& A_{12}= \left.\begin{pmatrix} & & & -30&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -30&& && \end{pmatrix} \right._{20\times20}, \qquad A_{15}= \left.\begin{pmatrix} & & & -50&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -50&& && \end{pmatrix} \right._{20\times15}, \\& A_{34}= \left.\begin{pmatrix} & & & -30&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -30&& && \end{pmatrix} \right._{30\times15}, \qquad A_{13}= \left.\begin{pmatrix} & & & -30&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -30&& && \end{pmatrix} \right._{20\times30}, \end{aligned}$$

and

$$ A_{45}= \left.\begin{pmatrix} -20& & & -20&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -20&& &-20& \end{pmatrix} \right._{15\times15}, $$

with \(A_{44}=A_{55}\), \(A^{T}_{21}=A_{12}\), \(A_{14}=A_{15}=A_{24}=A_{25}=A^{T}_{41}=A^{T}_{51}=A^{T}_{42}=A^{T}_{52}\), \(A^{T}_{54}=A_{45}\), \(A^{T}_{43}=A_{34}=A^{T}_{53}=A_{35}\), and \(A^{T}_{31}=A_{13}=A^{T}_{32}=A_{23}\).

We set \(t_{i}=\|[A_{ii}]^{-1}\|^{-1}\) and \(s_{i}=\mathop{\sum^{5}_{m=1}}\limits_{\hphantom{aai}m\neq i} \|A_{im}\|\), with \(i=1,2,\ldots,5\). After computation in Matlab, it is straightforward to see from Table 1 that the matrix A is an \(\mbox{I-BSDD}_{s}\) but is not an \(\mbox{I-BSD}_{s}\). These quantities can be reproduced by the sketch below.
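
As a hedged illustration (the authors used Matlab; this NumPy sketch and its names are ours), the quantities \(t_{i}\) and \(s_{i}\) of Table 1, as well as the analogous \(\zeta_{i}\) of Table 5 in Example 4.2, can be computed as follows once A and the block sizes are assembled from the blocks above.

```python
import numpy as np

def dominance_quantities(A, sizes):
    """Print t_i = ||A_ii^{-1}||^{-1}, s_i = sum_{m != i} ||A_im|| and
    zeta_i = sum_{m != i} ||A_ii^{-1} A_im|| (2-norm) for each block row."""
    off = np.concatenate(([0], np.cumsum(sizes)))
    blk = lambda i, m: A[off[i]:off[i + 1], off[m]:off[m + 1]]
    s = len(sizes)
    for i in range(s):
        inv = np.linalg.inv(blk(i, i))
        t_i = 1.0 / np.linalg.norm(inv, 2)
        s_i = sum(np.linalg.norm(blk(i, m), 2) for m in range(s) if m != i)
        z_i = sum(np.linalg.norm(inv @ blk(i, m), 2) for m in range(s) if m != i)
        print(f"block {i + 1}: t_i = {t_i:.4f}, s_i = {s_i:.4f}, zeta_i = {z_i:.4f}")

# Block sizes of Examples 4.1 and 4.2 (from the definitions of A_11, ..., A_55):
# dominance_quantities(A, sizes=[20, 20, 30, 15, 15])
```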

Table 1 The experimental verification of \(\pmb{\mbox{I-BSDD}_{s}}\)

To verify the heritable properties of the block diagonal-Schur complements inherited from the original \(\mbox{I-BSDD}_{s}\), we need to consider the two cases \(i_{0}\in\alpha\) and \(i_{0}\in\alpha^{c}\), where \(i_{0}\) is defined in (9). From Table 1, it is easy to see that \(i_{0}=1\). Firstly, we consider the case \(i_{0}\in\alpha=\{1,2\}\) and \(\alpha^{c}=\{3,4,5\}\), and set \(\widetilde{t}_{i}=\|[\widetilde{A}_{ii}]^{-1}\|^{-1}\) and \(\widetilde {s}_{i}=\mathop{\sum^{3}_{m=1}}\limits_{\hphantom{aai}m\neq i} \|\widetilde{A}_{im}\|\), with \(i=1,2,3\). From Table 2, we can easily see that the block diagonal-Schur complement \(A/_{\circ} A(\alpha)\) of \(A(\alpha)\) in A is an \(\mbox{I-BSD}_{s}\); thereby, we verify conclusion (i) of Theorem 3.1. Secondly, we consider the case \(i_{0}\in\alpha^{c}=\{1, 2, 3\}\) and \(\alpha=\{4, 5\}\), and set \(\widehat{t}_{i}=\|[\widetilde{A}_{ii}]^{-1}\|^{-1}\) and \(\widehat {s}_{i}=\mathop{\sum^{3}_{m=1}}\limits_{\hphantom{aai}m\neq i} \|\widetilde{A}_{im}\|\), with \(i=1,2,3\). From Table 3, we can easily see that \(A/_{\circ} A(\alpha)\) is an \(\mbox{I-BSDD}_{s}\) but is not an \(\mbox{I-BSD}_{s}\); accordingly, we validate conclusion (ii) of Theorem 3.1.

Table 2 The experimental verification of \(\pmb{\mbox{I-BSD}_{s}}\)
Table 3 The experimental verification of \(\pmb{\mbox{I-BSDD}_{s}}\)

For Example 4.1, Table 4 lists the numerical results corresponding to the tolerance \(\epsilon=10^{-8}\); they show that the block diagonal Schur-based \(\operatorname{GMRES}(l)\) iteration method with the preconditioner \(P_{1}\) is more efficient than the ordinary \(\operatorname{GMRES}(l)\) iteration method.

Table 4 Number of iterations and solution time in seconds of \(\pmb{\operatorname{PGMRES}(l)}\) iteration method with preconditioner \(\pmb{\mathcal{P}_{1}}\) and the ordinary \(\pmb{\operatorname{GMRES}(l)}\) iteration method, where \(\pmb{\alpha=\{1, 2\}}\)

Example 4.2

Consider a linear equation system \(Ax=b\) whose coefficient matrix A is denoted by

$$ A= \begin{pmatrix} A_{11}& A_{12}& A_{13}& A_{14}& A_{15}\\ A_{21}& A_{22}& A_{23}& A_{24}& A_{25}\\ A_{31}& A_{32}& A_{33}& A_{34}& A_{35}\\ A_{41}& A_{42}& A_{43}& A_{44}& A_{45}\\ A_{51}& A_{52}& A_{53}& A_{54}& A_{55} \end{pmatrix}, $$

where the submatrices of the coefficient matrix A take on structures of the forms

$$\begin{aligned}& A_{11}= \left.\begin{pmatrix} 100& -50& & &\\ -50& 100& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 100 \end{pmatrix} \right._{20\times20}, \\& A_{22}= \left.\begin{pmatrix} 1{,}800& -50& & &\\ -50& 1{,}800& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 1{,}800 \end{pmatrix} \right._{20\times20}, \\& A_{33}= \left.\begin{pmatrix} 1{,}410& -40& & &\\ -40& 1{,}420& -40& &\\ & \ddots& \ddots& \ddots&\\ && & -40& 1{,}700 \end{pmatrix} \right._{30\times30}, \\& A_{44}= \left.\begin{pmatrix} 1{,}800& -50& & &\\ -50& 1{,}800& -50& &\\ & \ddots& \ddots& \ddots&\\ && & -50& 1{,}800 \end{pmatrix} \right._{15\times15}, \\& A_{12}= \left.\begin{pmatrix} & & & -30&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -30&& && \end{pmatrix} \right._{20\times20}, \qquad A_{15}= \left.\begin{pmatrix} & & & -50&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -50&& && \end{pmatrix} \right._{20\times15}, \\& A_{34}= \left.\begin{pmatrix} & & & -30&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -30&& && \end{pmatrix} \right._{30\times15}, \qquad A_{13}= \left.\begin{pmatrix} & & & -30&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -30&& && \end{pmatrix} \right._{20\times30}, \end{aligned}$$

and

$$ A_{45}= \left.\begin{pmatrix} -20& & & -20&\\ & & & &\\ & \ddots& \ddots& \ddots&\\ -20&& &-20& \end{pmatrix} \right._{15\times15}, $$

with \(A_{44}=A_{55}\), \(A^{T}_{21}=A_{12}\), \(A_{14}=A_{15}=A_{24}=A_{25}=A^{T}_{41}=A^{T}_{51}=A^{T}_{42}=A^{T}_{52}\), \(A^{T}_{54}=A_{45}\), \(A^{T}_{43}=A_{34}=A^{T}_{53}=A_{35}\), and \(A^{T}_{31}=A_{13}=A^{T}_{32}=A_{23}\).

Denoting \(\zeta_{i}=\mathop{\sum^{5}_{m=1}}\limits_{\hphantom{aai}m\neq i} \|[A_{ii}]^{-1}A_{im}\|\), with \(i=1,2,\ldots,5\), after computation in Matlab it is straightforward to see from Table 5 that the matrix A is an \(\mbox{II-BSDD}_{s}\) but is not an \(\mbox{II-BSD}_{s}\).

Table 5 The experimental verification of \(\pmb{\mbox{II-BSDD}_{s}}\)

To verify the heritable properties of the block diagonal-Schur complements inherited from the original \(\mbox{II-BSDD}_{s}\), we need to consider the two cases \(i_{0}\in\alpha\) and \(i_{0}\in\alpha^{c}\), where \(i_{0}\) is defined in (10). From the first line of Table 5, it is easy to see that \(i_{0}=1\). Firstly, we consider the case \(i_{0}\in\alpha=\{1,2\}\) and \(\alpha ^{c}=\{3,4,5\}\), and set \(\widetilde{s}_{i}= \mathop{\sum^{3}_{m=1}}\limits_{\hphantom{aai}m\neq i} \|[\widetilde{A}_{ii}]^{-1}\widetilde{A}_{im}\|\), with \(i=1,2,3\). From Table 6, we can easily see that the block diagonal-Schur complement \(A/_{\circ} A(\alpha)\) of \(A(\alpha)\) in A is an \(\mbox{II-BSD}_{s}\); therefore, we verify result (i) of Theorem 3.2. Secondly, we consider the case \(i_{0}\in\alpha^{c}=\{1, 2, 3\}\) and \(\alpha=\{4, 5\}\), and set \(\widehat{s}_{i}=\mathop{\sum^{3}_{m=1}}\limits_{\hphantom{aai}m\neq i} \|[\widetilde{A}_{ii}]^{-1}\widetilde{A}_{im}\|\), with \(i=1,2,3\). From Table 7, we can easily see that \(A/_{\circ} A(\alpha)\) is an \(\mbox{II-BSDD}_{s}\) but is not an \(\mbox{II-BSD}_{s}\); accordingly, we validate result (ii) of Theorem 3.2.

Table 6 The experimental verification of \(\pmb{\mbox{II-BSD}_{s}}\)
Table 7 The experimental verification of \(\pmb{\mbox{II-BSDD}_{s}}\)

For Example 4.2, Table 8 lists the numerical results corresponding to the tolerance \(\epsilon=10^{-8}\), where \(\alpha=\{1, 2\}\); they show that the block diagonal Schur-based \(\operatorname{GMRES}(l)\) iteration method with the preconditioner \(P_{1}\) is more efficient than the ordinary \(\operatorname{GMRES}(l)\) iteration method.

Table 8 Number of iterations and solution time in seconds of \(\pmb{\operatorname{PGMRES}(l)}\) iteration method with preconditioner \(\pmb{\mathcal{P}_{1}}\) and \(\pmb{\operatorname{GMRES}(l)}\) iteration method

5 Conclusions

In this paper, the heritable properties of the block diagonal-Schur complements inherited from the original matrix have been established. Numerical experiments further illustrate the theoretical results and their practical performance.

One advantage of studying these heritable properties is that it becomes possible to estimate sharp bounds for the eigenvalues of the original matrix and of the corresponding block diagonal-Schur complements. We therefore believe that the techniques presented here can be used to develop robust and efficient Schur complement preconditioning techniques for solving linear systems.

References

  1. Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)

  2. Carlson, D, Markham, T: Schur complements of diagonally dominant matrices. Czechoslov. Math. J. 29, 246-251 (1979)

  3. Liu, JZ, Huang, YQ: Some properties on Schur complements of H-matrices and diagonally dominant matrices. Linear Algebra Appl. 389, 365-380 (2004)

  4. Golub, GH, Van Loan, CF: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)

  5. Kress, R: Numerical Analysis. Springer, New York (1998)

  6. Song, YZ: The convergence of block AOR iterative methods. Appl. Math. 6, 39-45 (1993)

  7. Petra, CG, Anitescu, M: A preconditioning technique for Schur complement systems arising in stochastic optimization. Comput. Optim. Appl. 52, 315-344 (2012)

  8. Malas, T, Gürel, L: Schur complement preconditioners for surface integral-equation formulations of dielectric problems solved with the multilevel fast multipole algorithm. SIAM J. Sci. Comput. 33, 2440-2467 (2011)

  9. Yamazaki, I, Ng, EG: Preconditioning Schur complement systems of highly-indefinite linear systems for a parallel hybrid solver. Numer. Math., Theory Methods Appl. 3, 352-366 (2010)

  10. Puntanen, S, Styan, PH: Schur complements in statistics and probability. In: The Schur Complement and Its Applications. Numerical Methods and Algorithms, vol. 4, pp. 163-226 (2005)

  11. Ouellette, DV: Schur complements and statistics. Linear Algebra Appl. 36, 187-295 (1981)

  12. Zhang, FZ: The Schur Complement and Its Applications. Springer, New York (2005)

  13. Kolotilina, LY: Nonsingularity/singularity criteria for nonstrictly block diagonally dominant matrices. Linear Algebra Appl. 359, 133-159 (2003)

  14. You, ZY, Jiang, ZQ: The diagonal dominance of block matrices. J. Xi’an Jiaotong Univ. 18, 123-125 (1984)

  15. Robert, F: Blocs H-matrices et convergence des méthodes itératives classiques par blocs. Linear Algebra Appl. 2, 223-265 (1969)

  16. Polman, B: Incomplete blockwise factorizations of (block) H-matrices. Linear Algebra Appl. 90, 119-132 (1987)

  17. Zhang, CY, Li, YT, Chen, F: On Schur complement of block diagonally dominant matrices. Linear Algebra Appl. 414, 533-546 (2006)

  18. Feingold, DG, Varga, RS: Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem. Pac. J. Math. 12, 1241-1250 (1962)

  19. Saad, Y, Schultz, MH: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 7, 856-869 (1986)

Acknowledgements

The author gratefully acknowledges the valuable comments and suggestions from the anonymous referees. This research was supported by the Project Foundation of Chongqing Municipal Education Committee (KJ120832).

Author information

Correspondence to Zhuo-Hong Huang.

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Cite this article

Huang, ZH. On block diagonal-Schur complements of the block strictly doubly diagonally dominant matrices. J Inequal Appl 2015, 80 (2015). https://doi.org/10.1186/s13660-015-0597-4
