Open Access

Several new inequalities for the minimum eigenvalue of M-matrices

Journal of Inequalities and Applications 2016, 2016:119

DOI: 10.1186/s13660-016-1062-8

Received: 12 January 2016

Accepted: 6 April 2016

Published: 14 April 2016

Abstract

Several convergent sequences of the lower bounds for the minimum eigenvalue of M-matrices are given. It is proved that these sequences are monotone increasing and improve some existing results. Finally, numerical examples are given to show that these sequences are better than some known results and could reach the true value of the minimum eigenvalue in some cases.

Keywords

M-matrix; nonnegative matrix; Hadamard product; spectral radius; minimum eigenvalue

MSC

15A06 15A15 15A48

1 Introduction

For a positive integer n, N denotes the set \(\{1, 2, \ldots, n\}\), and \(\mathbb{R}^{n\times n}(\mathbb{C}^{n\times n})\) denotes the set of all \({n\times n}\) real (complex) matrices throughout. For \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\), we write \(A\geq0\) (\(A>0\)) if all \(a_{ij}\geq0\) (\(a_{ij}>0\)), \(i,j\in N\). If \(A\geq0\) (\(A>0\)), we say A is nonnegative (positive, respectively).

Let \(Z_{n}\) denote the class of all \(n\times n\) real matrices whose off-diagonal entries are all nonpositive. A matrix A is called a nonsingular M-matrix if \(A\in Z_{n}\) and its inverse, denoted by \(A^{-1}\), is nonnegative. Denote by \(M_{n}\) the set of all \(n\times n\) nonsingular M-matrices (see [1]). If A is a nonsingular M-matrix, then A has a positive eigenvalue equal to \(\tau(A)=\rho(A^{-1})^{-1}\), where \(\rho(A^{-1})\) is the Perron eigenvalue of the nonnegative matrix \(A^{-1}\). It is easy to prove that \(\tau(A)=\min\{|\lambda|:\lambda\in\sigma(A)\}\), where \(\sigma(A)\) denotes the spectrum of A; \(\tau(A)\) is called the minimum eigenvalue of A (see [2]). The Perron-Frobenius theorem tells us that \(\tau(A)\) is an eigenvalue of A corresponding to a nonnegative eigenvector \(x=[x_{1}, x_{2},\ldots,x_{n}]^{T}\). If, in addition, A is irreducible, then \(\tau(A)\) is simple and \(x>0\) (see [1]). If G is the diagonal part of an M-matrix A, then the spectral radius of the Jacobi iterative matrix \(J_{A}=G^{-1}(G-A)\) of A, denoted by \(\rho(J_{A})\), is less than 1 (see [1]).
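As a quick illustration (our sketch, assuming NumPy; the 3×3 matrix is an arbitrary example, not from the paper), both characterizations of \(\tau(A)\) agree numerically:

```python
# Sketch: verify tau(A) = rho(A^{-1})^{-1} = min{|lambda|} numerically.
# The 3x3 matrix below is an arbitrary illustrative example.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])     # in Z_3, strictly row diagonally dominant

Ainv = np.linalg.inv(A)
assert (Ainv >= 0).all()               # hence A is a nonsingular M-matrix

tau_via_inverse = 1.0 / np.max(np.abs(np.linalg.eigvals(Ainv)))
tau_via_spectrum = np.min(np.abs(np.linalg.eigvals(A)))
print(tau_via_inverse, tau_via_spectrum)   # both approx 1.586
```

The agreement reflects the fact that the eigenvalues of \(A^{-1}\) are the reciprocals of those of A, so the minimum modulus of one spectrum is the reciprocal of the maximum modulus of the other.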

A matrix A is called reducible if there exists a nonempty proper subset \(I\subset N\) such that \(a_{ij}=0\), \(\forall i \in I\), \(\forall j\notin I\). If A is not reducible, then we call A irreducible (see [1]).

For two real matrices \(A=[a_{ij}]\) and \(B=[b_{ij}]\) of the same size, the Hadamard product of A and B is defined as the matrix \(A\circ B=[a_{ij}b_{ij}]\). If \(A\in M_{n}\) and \(B \geq0\), then it is clear that \(B\circ A^{-1}\geq0\) (see [2]).

For convenience, we employ the following notations throughout. Let \(A=[a_{ij}]\in M_{n}\) with \(a_{ii}\neq0\) for all \(i\in N\), and \(A^{-1}=[\alpha_{ij}]\). For \(i,j,k\in N\), \(j\neq i\), denote
$$\begin{aligned}& R_{i}(A)=\sum_{j=1}^{n}{a_{ij}}, \qquad M_{1}=\max_{i\in N}\sum _{j=1}^{n}{\alpha_{ij}},\qquad M_{2}=\min_{i\in N}\sum_{j=1}^{n}{ \alpha _{ij}};\qquad \sigma_{i}=\frac{\sum_{j\neq i}{|a_{ij}|}}{|a_{ii}|}, \\& \sigma =\max_{i\in N}\sigma_{i},\qquad \varphi_{i}= \frac{1}{a_{ii}-\sum_{k\neq i}|a_{ik}|\sigma_{k}};\qquad r_{i}=\max_{j\neq{i}} \biggl\{ \frac{|a_{ji}|}{|a_{jj}|-\sum_{k \neq j, i}|a_{jk}|} \biggr\} , \\& m_{ji}=\frac{|a_{ji}|+\sum_{k\neq{j,i}}|a_{jk}|r_{i}}{|a_{jj}|},\qquad h_{i}=\max _{j\neq{i}} \biggl\{ \frac{|a_{ji}|}{|a_{jj}|m_{ji}-\sum_{k\neq{j,i}}|a_{jk}|m_{ki}} \biggr\} , \\& u_{ji}=\frac{|a_{ji}|+\sum_{k\neq{j,i}}|a_{jk}|m_{ki}h_{i}}{|a_{jj}|},\qquad u_{i}=\max _{j\neq{i}}\{u_{ij}\}. \end{aligned}$$
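To make this notation concrete, a small sketch (ours, assuming NumPy, on an arbitrary example matrix) computes \(\sigma_{i}\), σ, and \(\varphi_{i}\) directly from the entries:

```python
# Sketch: compute sigma_i, sigma and varphi_i from the entries of A.
# The matrix is an arbitrary strictly row diagonally dominant example.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
off = np.abs(A) - np.diag(np.abs(np.diag(A)))   # |a_ij| off the diagonal, 0 on it
d = np.abs(np.diag(A))

sigma_i = off.sum(axis=1) / d                   # sigma_i = sum_{j != i}|a_ij| / |a_ii|
sigma = sigma_i.max()                           # sigma = max_i sigma_i
varphi = 1.0 / (np.diag(A) - off @ sigma_i)     # 1/(a_ii - sum_{k != i}|a_ik| sigma_k)

print(sigma_i)   # approx [0.5, 0.6, 0.6667]
print(sigma)     # approx 0.6667
```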

Recall that \(A=[a_{ij}]\in\mathbb{C}^{n\times n}\) is called row diagonally dominant if \(\sigma_{i}\leq1\) for all \(i\in N\). If \(\sigma_{i}<1\) for all \(i\in N\), we say that A is strictly row diagonally dominant. It is well known that a strictly row diagonally dominant matrix is nonsingular. A is called weakly chained diagonally dominant if \(\sigma_{i}\leq1\) for all \(i\in N\), \(J(A)=\{i\in N: \sigma_{i}<1 \}\neq\varnothing\), and, for each \(i\in N\setminus J(A)\), there exist indices \(i_{1},i_{2},\ldots,i_{k}\) in N with \(a_{i_{l}i_{l+1}}\neq0\), \(0\leq l\leq k-1\), where \(i_{0}=i\) and \(i_{k}\in J(A)\). Notice that a strictly row diagonally dominant matrix is also weakly chained diagonally dominant (see [3]).

Estimating the bounds for the minimum eigenvalue of M-matrices is an interesting subject in matrix theory and has important applications in many practical problems (see [3]); various refined bounds can be found in [3–9]. Hence, it is worthwhile to estimate the bounds for \(\tau(A)\).

In [3], Shivakumar et al. obtained the following bounds for \(\tau(A)\): Let \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\) be a weakly chained diagonally dominant M-matrix, \(A^{-1}=[\alpha _{ij}]\). Then
$$ \min_{i\in N}R_{i}(A)\leq\tau(A)\leq\max _{i\in N}R_{i}(A), \qquad \tau(A)\leq\min _{i\in N}{a_{ii}} \quad \text{and} \quad \frac {1}{M_{1}}\leq\tau(A)\leq\frac{1}{M_{2}}. $$
(1)
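These two-sided bounds are easy to check numerically; the following sketch (ours, assuming NumPy, with an arbitrary 3×3 strictly row diagonally dominant M-matrix, which is in particular weakly chained diagonally dominant) verifies all three parts of (1):

```python
# Sketch: check the three two-sided bounds in (1) on a small strictly
# row diagonally dominant (hence weakly chained) M-matrix.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
Ainv = np.linalg.inv(A)
tau = np.min(np.abs(np.linalg.eigvals(A)))

R = A.sum(axis=1)                        # row sums R_i(A)
M1 = Ainv.sum(axis=1).max()              # M_1 = max_i sum_j alpha_ij
M2 = Ainv.sum(axis=1).min()              # M_2 = min_i sum_j alpha_ij

assert R.min() <= tau <= R.max()         # min R_i <= tau <= max R_i
assert tau <= np.diag(A).min()           # tau <= min a_ii
assert 1.0 / M1 <= tau <= 1.0 / M2       # 1/M_1 <= tau <= 1/M_2
```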
Subsequently, Tian and Huang [4] provided a lower bound for \(\tau(A)\) using the spectral radius of the Jacobi iterative matrix \(J_{A}\) of A: Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then
$$ \tau(A)\geq\frac{1}{[1+(n-1)\rho(J_{A})]\max_{i\in N}{\alpha_{ii}}}. $$
(2)
Furthermore, when A is a strictly diagonally dominant M-matrix, they presented a lower bound for \(\tau(A)\) which depends only on the entries of A: If \(A=[a_{ij}]\in M_{n}\) is strictly row diagonally dominant, then
$$ \tau(A)\geq\frac{1}{[1+(n-1)\sigma]\max_{i\in N}{\varphi_{i}}}. $$
(3)
In 2013, Li et al. [5] improved (2) and (3), and they gave the following result: Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then
$$ \tau(A)\geq\frac{2}{\max_{i\neq j} \{\alpha_{ii}+\alpha _{jj}+[(\alpha_{ii}-\alpha_{jj})^{2}+4(n-1)^{2}\alpha_{ii}\alpha_{jj}\rho ^{2}(J_{A})]^{\frac{1}{2}} \}}. $$
(4)
Furthermore, when A is a strictly diagonally dominant M-matrix, they also presented a lower bound for \(\tau(A)\) which depends only on the entries of A: If \(A=[a_{ij}]\in M_{n}\) is strictly row diagonally dominant, then
$$ \tau(A)\geq\frac{2}{\max_{i\neq j} \{\varphi_{i}+\varphi _{j}+[\varphi_{ij}^{2}+4(n-1)^{2}\varphi_{i}\varphi_{j}\sigma^{2}]^{\frac {1}{2}} \}}, $$
(5)
where \(\varphi_{ij}=\max\{\varphi_{i},\varphi_{j}\}-\min\{ a_{ii}^{-1},a_{jj}^{-1}\}\).
In 2015, Wang and Sun [6] presented the following result: Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then
$$ \tau(A)\geq\frac{2}{\max_{i\neq j} \{\alpha_{ii}+\alpha _{jj}+[(\alpha_{ii}-\alpha_{jj})^{2}+4(n-1)^{2}\alpha_{ii}\alpha _{jj}u_{i}u_{j}]^{\frac{1}{2}} \}}. $$
(6)
They also gave examples showing that (6) is sharper than (2) and (4).

In this paper, we continue the study of the problems mentioned above and give some convergent sequences of lower bounds for the minimum eigenvalue of M-matrices which improve (1)-(6). Finally, numerical examples are given to verify the theoretical results.

2 Main results

In this section, we present our main results. First of all, we give some notation and lemmas. Let \(B\geq0\), \(D=\operatorname{diag}(b_{ii})\), and \(D_{1}=\operatorname{diag}(d_{ii})\), where \(d_{ii}=1\) if \(b_{ii}=0\) and \(d_{ii}=b_{ii}\) if \(b_{ii}\neq0\). Denote \(\mathcal{J}_{B}=D_{1}^{-1}(B-D)\); then \(\rho(\mathcal{J}_{B^{T}})=\rho(\mathcal{J}_{B})\) (see [6]).

Let \(A=[a_{ij}]\in\mathbb{R}^{n\times n}\), \(a_{ii}\neq0\), \(i\in N\). For \(i,j,k\in N\), \(j\neq i\), \(t=1,2,\ldots\) , denote
$$\begin{aligned}& u^{(0)}_{ji}=u_{ji},\qquad p_{ji}^{(t)}= \frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|u_{ki}^{(t-1)}}{|a_{jj}|},\qquad p^{(t)}_{i}=\max _{j\neq{i}}\bigl\{ p^{(t)}_{ij}\bigr\} , \\& h^{(t)}_{i}=\max_{j\neq{i}} \biggl\{ \frac {|a_{ji}|}{|a_{jj}|p^{(t)}_{ji}-\sum_{k\neq {j,i}}|a_{jk}|p^{(t)}_{ki}} \biggr\} ,\qquad u^{(t)}_{ji}= \frac{|a_{ji}|+\sum_{k\neq {j,i}}|a_{jk}|p^{(t)}_{ki}h^{(t)}_{i}}{|a_{jj}|}. \end{aligned}$$
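The iteration above can be sketched directly in code (ours, assuming NumPy; the loops deliberately mirror the formulas, and the 3×3 matrix is an arbitrary example). For a strictly row diagonally dominant M-matrix, Lemma 1 below guarantees that the returned vectors \(p^{(t)}_{i}\) decrease monotonically in t:

```python
# Sketch: the iterates p^(t)_i of Section 2 for a strictly row
# diagonally dominant M-matrix (loops kept close to the formulas).
import numpy as np

def p_iterates(A, t_max):
    B = np.abs(A); n = len(B)

    def S(j, i, w):                       # sum_{k != j,i} |a_jk| * w[k]
        return sum(B[j, k] * w[k] for k in range(n) if k not in (j, i))

    ones = np.ones(n)
    r = [max(B[j, i] / (B[j, j] - S(j, i, ones)) for j in range(n) if j != i)
         for i in range(n)]
    m = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if j != i:
                m[j, i] = (B[j, i] + S(j, i, ones) * r[i]) / B[j, j]
    h = [max(B[j, i] / (B[j, j] * m[j, i] - S(j, i, m[:, i]))
             for j in range(n) if j != i) for i in range(n)]
    u = np.zeros((n, n))                  # u^(0)_ji = u_ji
    for j in range(n):
        for i in range(n):
            if j != i:
                u[j, i] = (B[j, i] + S(j, i, m[:, i]) * h[i]) / B[j, j]

    p_i = []
    for t in range(t_max):                # u holds u^(t-1)
        p = np.zeros((n, n))
        for j in range(n):
            for i in range(n):
                if j != i:
                    p[j, i] = (B[j, i] + S(j, i, u[:, i])) / B[j, j]
        ht = [max(B[j, i] / (B[j, j] * p[j, i] - S(j, i, p[:, i]))
                  for j in range(n) if j != i) for i in range(n)]
        for j in range(n):
            for i in range(n):
                if j != i:
                    u[j, i] = (B[j, i] + S(j, i, p[:, i]) * ht[i]) / B[j, j]
        p_i.append(np.array([max(p[i, j] for j in range(n) if j != i)
                             for i in range(n)]))
    return p_i                            # [p^(1)_i, ..., p^(t_max)_i]

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
seq = p_iterates(A, 3)
# Lemma 1 predicts 0 <= p^(t+1)_i <= p^(t)_i < 1 for each i.
```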

Similar to the proof of Lemma 1, Lemma 2, and Lemma 3 in [7], we can obtain the following lemma.

Lemma 1

If \(A=[a_{ij}]\in M_{n}\) is strictly row diagonally dominant, then \(A^{-1}=[\alpha_{ij}]\) exists, and for all \(i,j\in{N}\), \(j\neq{i}\), \(t=1,2,\ldots\) ,
(a) \(1>r_{i}\geq m_{ji}\geq u_{ji}=u^{(0)}_{ji}\geq {p^{(1)}_{ji}}\geq{u^{(1)}_{ji}}\geq{p^{(2)}_{ji}}\geq {u^{(2)}_{ji}}\geq\cdots\geq{p^{(t)}_{ji}}\geq{u^{(t)}_{ji}}\geq\cdots\geq0\);

(b) \(1\geq h_{i}\geq0\), \(1\geq h_{i}^{(t)}\geq0\);

(c) \(\alpha_{ji}\leq p_{ji}^{(t)}\alpha_{ii}\);

(d) \(\frac{1}{a_{ii}}\leq\alpha_{ii}\leq\frac{1}{a_{ii}-\sum_{j\neq i}|a_{ij}|p_{ji}^{(t)}}=\phi_{i}^{(t)}\).

Lemma 2

[7]

If \(A^{-1}\) is a doubly stochastic matrix, then \(Ae=e\), \(A^{T}e=e\), where \(e=[1,1,\ldots,1]^{T}\).

Lemma 3

[2]

Let \(A,B\in\mathbb{R}^{n\times n}\), and let \(X,Y\in\mathbb{R}^{n\times n}\) be diagonal matrices. Then
$$X(A\circ B)Y=(XAY)\circ B =(XA)\circ(BY)=(AY)\circ(XB)=A\circ(XBY). $$

Lemma 4

[2]

Let \(A=[a_{ij}]\in\mathbb{C}^{n\times n}\) and \(x_{1}, x_{2}, \ldots, x_{n}\) be positive real numbers. Then all the eigenvalues of A lie in the region
$$\bigcup_{i,j\in N,i\neq j} \biggl\{ z\in\mathbb{C} :|z-a_{ii}||z-a_{jj}| \leq \biggl(x_{i}\sum _{k \neq i} \frac{1}{x_{k}}|a_{ki}| \biggr) \biggl(x_{j}\sum_{k \neq j} \frac{1}{x_{k}}|a_{kj}| \biggr) \biggr\} . $$
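For a concrete check (our sketch, assuming NumPy, on an arbitrary example matrix), taking all \(x_{i}=1\) in Lemma 4 reduces the bracketed factors to deleted column sums, and every eigenvalue of a sample matrix indeed lies in the resulting union of ovals:

```python
# Sketch: check the inclusion region of Lemma 4 with x_i = 1, so each
# bracketed factor reduces to the deleted column sum sum_{k != i}|a_ki|.
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
n = len(A)
col = np.abs(A).sum(axis=0) - np.abs(np.diag(A))   # sum_{k != i} |a_ki|

for z in np.linalg.eigvals(A):
    assert any(abs(z - A[i, i]) * abs(z - A[j, j]) <= col[i] * col[j] + 1e-9
               for i in range(n) for j in range(n) if i != j)
```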

Theorem 1

Let \(A=[a_{ij}]\in M_{n}\), \(n\geq2\), \(B=[b_{ij}]\geq0\), and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,
$$\begin{aligned} \rho\bigl(B\circ A^{-1}\bigr) \leq&\frac{1}{2}\max _{i\neq j} \bigl\{ b_{ii}\alpha_{ii}+b_{jj} \alpha_{jj}+ \bigl[(b_{ii}\alpha_{ii}-b_{jj} \alpha _{jj})^{2}+4p_{i}^{(t)}p_{j}^{(t)} \alpha_{ii}\alpha_{jj}d_{ii}d_{jj}\rho ^{2}(\mathcal{J}_{B}) \bigr]^{\frac{1}{2}} \bigr\} \\ =& \Omega_{t}. \end{aligned}$$
(7)

Proof

Since A is an M-matrix, there exists a positive diagonal matrix X, such that \(X^{-1}AX\) is a strictly row diagonally dominant M-matrix (see [2]), and
$$\rho\bigl(B\circ A^{-1}\bigr)=\rho\bigl(X^{-1}\bigl(B\circ A^{-1}\bigr)X\bigr)=\rho\bigl(B\circ\bigl(X^{-1}AX \bigr)^{-1}\bigr). $$
Hence, for convenience and without loss of generality, we assume that A is a strictly row diagonally dominant matrix.
(a) First, we assume that A and B are irreducible matrices. Since B is nonnegative and irreducible, so is \(\mathcal{J}_{B^{T}}\). Hence there exists a positive vector \(x=(x_{i})\) such that \(\mathcal{J}_{B^{T}}x=\rho(\mathcal{J}_{B^{T}})x=\rho(\mathcal{J}_{B})x\); thus, we obtain \(\sum_{k\neq i}b_{ki}x_{k}=\rho(\mathcal{J}_{B})d_{ii}x_{i}\) and \(\sum_{k\neq j}b_{kj}x_{k}=\rho(\mathcal{J}_{B})d_{jj}x_{j}\), \(i,j\in N\). Let \(X=\operatorname{diag}(x_{1},x_{2},\ldots, x_{n})\); then
$$ \widehat{B}=[\hat{b}_{ij}]=XBX^{-1}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} b_{11}&\frac{b_{12}x_{1}}{x_{2}}&\cdots&\frac{b_{1n}x_{1}}{x_{n}}\\ \frac{b_{21}x_{2}}{x_{1}}&b_{22}&\cdots&\frac{b_{2n}x_{2}}{x_{n}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{b_{n1}x_{n}}{x_{1}}&\frac{b_{n2}x_{n}}{x_{2}}&\cdots&b_{nn} \end{array}\displaystyle \right ]. $$
From Lemma 3, we have \(\widehat{B}\circ A^{-1}=(XBX^{-1})\circ A^{-1}=X(B\circ A^{-1})X^{-1}\). Thus, \(\rho(\widehat{B}\circ A^{-1})=\rho(B\circ A^{-1})\). Let \(\lambda=\rho(\widehat{B}\circ A^{-1})\), then \(\lambda\geq b_{ii}\alpha_{ii}\), \(\forall i\in N\). By Lemma 4, there are \(i,j\in N\), \(i\neq j\) such that
$$ |\lambda-b_{ii}\alpha_{ii}||\lambda-b_{jj} \alpha_{jj}|\leq \biggl(p_{i}^{(t)}\sum _{k\neq i}\frac{1}{p_{k}^{(t)}}\hat{b}_{ki} \alpha_{ki} \biggr) \biggl(p_{j}^{(t)}\sum _{k\neq j}\frac{1}{p_{k}^{(t)}}\hat{b}_{kj} \alpha_{kj} \biggr). $$
Note that
$$\begin{aligned} p_{i}^{(t)}\sum_{k\neq i} \frac{1}{p_{k}^{(t)}}\hat{b}_{ki}\alpha_{ki} \leq& p_{i}^{(t)}\sum_{k\neq i} \frac{1}{p_{k}^{(t)}}\hat{b}_{ki}p_{ki}^{(t)} \alpha_{ii} \leq p_{i}^{(t)}\sum _{k\neq i}\frac{1}{p_{k}^{(t)}}\hat{b}_{ki}p_{k}^{(t)} \alpha_{ii} \\ =&p_{i}^{(t)}\alpha_{ii}\sum _{k\neq i}\hat{b}_{ki} =p_{i}^{(t)} \alpha_{ii}\sum_{k\neq i}\frac{b_{ki}x_{k}}{x_{i}} =p_{i}^{(t)}\alpha_{ii}d_{ii}\rho( \mathcal{J}_{B}). \end{aligned}$$
Similarly, we have \(p_{j}^{(t)}\sum_{k\neq j}\frac{1}{p_{k}^{(t)}}\hat{b}_{kj}\alpha_{kj}\leq p_{j}^{(t)}\alpha _{jj}d_{jj}\rho(\mathcal{J}_{B})\). Hence, we obtain
$$ (\lambda-b_{ii}\alpha_{ii}) ( \lambda-b_{jj}\alpha_{jj})\leq p_{i}^{(t)}p_{j}^{(t)} \alpha_{ii}\alpha_{jj}d_{ii}d_{jj} \rho^{2}(\mathcal{J}_{B}). $$
(8)
From (8), we have
$$ \lambda\leq\frac{1}{2} \bigl\{ b_{ii}\alpha_{ii}+b_{jj} \alpha_{jj}+ \bigl[(b_{ii}\alpha_{ii}-b_{jj} \alpha_{jj})^{2}+4p_{i}^{(t)}p_{j}^{(t)} \alpha _{ii}\alpha_{jj}d_{ii}d_{jj} \rho^{2}(\mathcal{J}_{B}) \bigr]^{\frac {1}{2}} \bigr\} , $$
that is,
$$\begin{aligned} \rho\bigl(B\circ A^{-1}\bigr) \leq& \frac{1}{2} \bigl\{ b_{ii}\alpha _{ii}+b_{jj}\alpha_{jj}+ \bigl[(b_{ii}\alpha_{ii}-b_{jj}\alpha _{jj})^{2}+4p_{i}^{(t)}p_{j}^{(t)} \alpha_{ii}\alpha_{jj}d_{ii}d_{jj} \rho ^{2}(\mathcal{J}_{B}) \bigr]^{\frac{1}{2}} \bigr\} \\ \leq& \frac{1}{2}\max_{i\neq j} \bigl\{ b_{ii} \alpha_{ii}+b_{jj}\alpha_{jj}+ \bigl[(b_{ii} \alpha _{ii}-b_{jj}\alpha_{jj})^{2}+4p_{i}^{(t)}p_{j}^{(t)} \alpha_{ii}\alpha _{jj}d_{ii}d_{jj} \rho^{2}(\mathcal{J}_{B}) \bigr]^{\frac{1}{2}} \bigr\} . \end{aligned}$$

(b) Now, assume that one of A and B is reducible. It is well known that a matrix in \(Z_{n}\) is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [1]). Denote by \(C=[c_{ij}]\) the \(n\times n\) permutation matrix with \(c_{12}=c_{23}=\cdots=c_{n-1,n}=c_{n1}=1\) and the remaining \(c_{ij}\) zero. Then both \(A-{\varepsilon}C\) and \(B+{\varepsilon}C\) are irreducible for any positive real number ε chosen small enough that all the leading principal minors of \(A-{\varepsilon}C\) remain positive. Substituting \(A-{\varepsilon}C\) and \(B+{\varepsilon}C\) for A and B in the previous case and letting \({\varepsilon}\rightarrow0\), the result follows by continuity. □

Theorem 2

The sequence \(\{\Omega_{t}\}\), \(t=1,2,\ldots\) , obtained from Theorem 1 is monotone decreasing, is bounded below by \(\rho(B\circ A^{-1})\), and is consequently convergent.

Proof

By Lemma 1, we have \(1>p^{(t)}_{ji}\geq p^{(t+1)}_{ji}\geq 0\), \(j,i\in N\), \(j \neq i\), \(t=1,2,\ldots\) . Then, by the definition of \(p^{(t)}_{i}\), it is easy to see that the sequence \(\{p^{(t)}_{i}\}\) is monotone decreasing, and so is \(\{\Omega_{t}\}\). Hence, the sequence \(\{\Omega_{t}\}\) is convergent. □

Theorem 3

Let \(A=[a_{ij}]\in M_{n}\) and \(A^{-1}=[\alpha_{ij}]\). Then, for \(t=1,2,\ldots\) ,
$$ \tau(A)\geq\frac{2}{\max_{i\neq j} \{\alpha_{ii}+\alpha _{jj}+ [(\alpha_{ii}-\alpha_{jj})^{2}+4(n-1)^{2}p_{i}^{(t)}p_{j}^{(t)}\alpha _{ii}\alpha_{jj} ]^{\frac{1}{2}} \}}=\Upsilon_{t}. $$
(9)

Proof

Let all entries of B in (7) be 1. Then \(b_{ii}=d_{ii}=1\) for all \(i\in N\) and \(\rho(\mathcal{J}_{B})=n-1\). Therefore, by (7), we have
$$ \tau(A)=\frac{1}{\rho(A^{-1})}\geq\frac{2}{\max_{i\neq j} \{ \alpha_{ii}+\alpha_{jj}+ [(\alpha_{ii}-\alpha _{jj})^{2}+4(n-1)^{2}p_{i}^{(t)}p_{j}^{(t)}\alpha_{ii}\alpha_{jj} ]^{\frac{1}{2}} \}}. $$
The proof is completed. □
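To see Theorem 3 in action, the following sketch (ours, assuming NumPy; not the authors' code, and the matrix is an arbitrary example) recomputes the iterates \(p^{(t)}_{i}\) from the definitions at the start of Section 2 and evaluates \(\Upsilon_{t}\):

```python
# Sketch: evaluate Upsilon_t of (9) for a sample M-matrix and confirm
# it lower-bounds tau(A); the iterates p^(t)_i follow Section 2.
import numpy as np

def upsilon(A, t_max):
    assert t_max >= 1
    B = np.abs(A); n = len(B)

    def S(j, i, w):                        # sum_{k != j,i} |a_jk| * w[k]
        return sum(B[j, k] * w[k] for k in range(n) if k not in (j, i))

    ones = np.ones(n)
    r = [max(B[j, i] / (B[j, j] - S(j, i, ones)) for j in range(n) if j != i)
         for i in range(n)]
    m = np.zeros((n, n)); u = np.zeros((n, n)); p = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if j != i:
                m[j, i] = (B[j, i] + S(j, i, ones) * r[i]) / B[j, j]
    h = [max(B[j, i] / (B[j, j] * m[j, i] - S(j, i, m[:, i]))
             for j in range(n) if j != i) for i in range(n)]
    for j in range(n):
        for i in range(n):
            if j != i:
                u[j, i] = (B[j, i] + S(j, i, m[:, i]) * h[i]) / B[j, j]

    for t in range(t_max):                 # u holds u^(t-1)
        for j in range(n):
            for i in range(n):
                if j != i:
                    p[j, i] = (B[j, i] + S(j, i, u[:, i])) / B[j, j]
        ht = [max(B[j, i] / (B[j, j] * p[j, i] - S(j, i, p[:, i]))
                  for j in range(n) if j != i) for i in range(n)]
        for j in range(n):
            for i in range(n):
                if j != i:
                    u[j, i] = (B[j, i] + S(j, i, p[:, i]) * ht[i]) / B[j, j]

    p_i = [max(p[i, j] for j in range(n) if j != i) for i in range(n)]
    alpha = np.diag(np.linalg.inv(A))      # alpha_ii
    worst = max(alpha[i] + alpha[j]
                + np.sqrt((alpha[i] - alpha[j]) ** 2
                          + 4 * (n - 1) ** 2 * p_i[i] * p_i[j]
                          * alpha[i] * alpha[j])
                for i in range(n) for j in range(n) if i != j)
    return 2.0 / worst

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
tau = np.min(np.abs(np.linalg.eigvals(A)))
assert upsilon(A, 1) <= tau <= np.diag(A).min()
```

By Theorem 4, increasing `t_max` can only improve (never worsen) the returned bound.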

Similar to the proof of Theorem 2, we can obtain the following theorem.

Theorem 4

The sequence \(\{\Upsilon_{t}\}\), \(t=1,2,\ldots\) , obtained from Theorem 3 is monotone increasing, is bounded above by \(\tau(A)\), and is consequently convergent.

Remark 1

We next give a simple comparison between (6) and (9). According to Lemma 1, for all \(i,j\in{N}\), \(j\neq{i}\), \(t=1,2,\ldots\) , we have \(1>u_{ji}\geq{p^{(t)}_{ji}}\geq0\). Furthermore, by the definitions of \(u_{i}\) and \(p^{(t)}_{i}\), we have \(1>u_{i}\geq{p^{(t)}_{i}}\geq0\). Consequently, for \(t=1,2,\ldots\) , the bounds in (9) are at least as large as the bound in (6).

Next, we give lower bounds for \(\tau(A)\) which depend only on the entries of A when A is a strictly diagonally dominant M-matrix.

Corollary 1

If \(A=[a_{ij}]\in M_{n}\) is strictly diagonally dominant, then for \(t=1,2,\ldots\) ,
$$ \tau(A)\geq\frac{2}{\max_{i\neq j} \{\phi_{i}^{(t)}+\phi _{j}^{(t)}+ [(\phi_{ij}^{(t)})^{2}+4(n-1)^{2}p_{i}^{(t)}p_{j}^{(t)}\phi _{i}^{(t)}\phi_{j}^{(t)} ]^{\frac{1}{2}} \}}=\Gamma_{t}, $$
(10)
where \(\phi_{ij}^{(t)}=\max\{\phi_{i}^{(t)},\phi_{j}^{(t)}\}-\min\{ a_{ii}^{-1},a_{jj}^{-1}\}\).

Proof

Let \(A^{-1}=[\alpha_{ij}]\). Since \(A\in M_{n}\) is strictly diagonally dominant, by Lemma 1, we have
$$ a_{ii}^{-1}\leq\alpha_{ii}\leq \phi_{i}^{(t)},\quad i\in N, $$
(11)
from which we get
$$ (\alpha_{ii}-\alpha_{jj})^{2}\leq \bigl(\max\bigl\{ \phi_{i}^{(t)},\phi_{j}^{(t)} \bigr\} -\min\bigl\{ a_{ii}^{-1},a_{jj}^{-1} \bigr\} \bigr)^{2}=\bigl(\phi_{ij}^{(t)} \bigr)^{2}. $$
(12)
From inequalities (9), (11), and (12), the conclusion follows. □

Corollary 2

The sequence \(\{\Gamma_{t}\}\), \(t=1,2,\ldots\) , obtained from Corollary 1 is monotone increasing, is bounded above by \(\tau(A)\), and is consequently convergent.

Theorem 5

Let \(A=[a_{ij}]\in M_{n}\) with \(a_{11}=a_{22}=\cdots=a_{nn}\), and suppose \(A^{-1}=[\alpha_{ij}]\) is doubly stochastic. Then, for \(t=1,2,\ldots\) ,
$$\begin{aligned} \Upsilon_{t} \geq&\frac{2}{\max_{i\neq j} \{\alpha_{ii}+\alpha _{jj}+[(\alpha_{ii}-\alpha_{jj})^{2}+4(n-1)^{2}\alpha_{ii}\alpha_{jj}\rho (J_{A})^{2}]^{\frac{1}{2}} \}} \\ \geq& \frac{1}{[1+(n-1)\rho(J_{A})]\max_{i\in N}{\alpha_{ii}}} \end{aligned}$$
(13)
and
$$ \Gamma_{t}\geq\frac{2}{\max_{i\neq j} \{\varphi_{i}+\varphi _{j}+[\varphi_{ij}^{2}+4(n-1)^{2}\varphi_{i}\varphi_{j}\sigma^{2}]^{\frac {1}{2}} \}}. $$
(14)

Proof

Since \(A^{-1}\) is doubly stochastic, by Lemma 2, we have \(|a_{ii}|=\sum_{k\neq i}|a_{ik}|+1=\sum_{k\neq i}|a_{ki}|+1\). Then, for every \(i\in N\), \(r_{i}=\max_{l\neq i} \{\frac{|a_{li}|}{|a_{ll}|-\sum_{k\neq l,i}|a_{lk}|} \} =\max_{l\neq i} \{\frac{|a_{li}|}{1+|a_{li}|} \} =\frac{\max_{l\neq i}|a_{li}|}{1+\max_{l\neq i}|a_{li}|}\). Since \(f(x)=\frac{x}{1+x}\) is an increasing function on \((0,+\infty)\), we have
$$r_{i}=\frac{\max_{l\neq i}|a_{li}|}{1+\max_{l\neq i}|a_{li}|} \leq\frac{\sum_{k\neq i}|a_{ki}|}{1+\sum_{k\neq i}|a_{ki}|} = \frac{\sum_{k\neq i}|a_{ki}|}{|a_{ii}|}=1-\frac {1}{|a_{ii}|},\quad i\in N. $$
Furthermore, note that
$$J_{A}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 0&-\frac{a_{12}}{a_{11}}&\cdots&-\frac{a_{1n}}{a_{11}}\\ -\frac{a_{21}}{a_{22}}&0&\cdots&-\frac{a_{2n}}{a_{22}}\\ \vdots&\vdots&\ddots&\vdots\\ -\frac{a_{n1}}{a_{nn}}&-\frac{a_{n2}}{a_{nn}}&\cdots&0 \end{array}\displaystyle \right ] $$
is a nonnegative matrix whose row sums are all equal, since \(\frac{\sum_{k\neq i}|a_{ik}|}{|a_{ii}|}=1-\frac{1}{|a_{ii}|}\), \(i\in N\), and \(a_{11}=a_{22}=\cdots=a_{nn}\). Hence, by the Perron-Frobenius theorem [1], we have \(\rho(J_{A})=1-\frac{1}{|a_{ii}|}\), \(i\in N\).
Combining with Lemma 1, we have, for all \(i,j\in {N}\), \(j\neq{i}\), \(t=1,2,\ldots\) , \(1>\rho(J_{A})\geq r_{i}\geq u_{ji}\geq{p^{(t)}_{ji}}\geq0\). By the definitions of \(u_{i}\), \(p^{(t)}_{i}\), we have
$$1>\rho(J_{A})\geq r_{i}\geq u_{i} \geq{p^{(t)}_{i}}\geq0,\quad i\in N. $$
Obviously, by inequalities (4), (9), and Theorem 4.2 of [5], the inequality (13) holds. The inequality (14) is proved similarly. □

3 Numerical examples

In this section, several numerical examples are given to verify the theoretical results.

Example 1

Let
$$A = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 27&-2&-4&-1&-3&-3&-4&-5&-1&-3\\ -2&34&-13&-2&-4&-2&-5&0&-3&-2\\ -3&-5&34&-6&-4&-3&-5&-2&-3&-2\\ 0&-3&-4&38&-13&-4&-1&-4&-3&-5\\ -3&-3&-1&-11&41&-9&-2&-3&-4&-4\\ -3&-5&-2&-3&-6&35&-1&-5&-5&-4\\ -5&-2&0&-5&0&-7&34&-8&-1&-5\\ -1&-4&-3&-2&-5&-1&-9&32&-1&-5\\ -4&-4&-2&-4&-4&-3&-2&-1&33&-8\\ -5&-5&-4&-3&-1&-2&-4&-3&-11&37.9 \end{array}\displaystyle \right ]. $$
It is easy to verify that A is a nonsingular M-matrix, but it is not weakly chained diagonally dominant. Hence inequality (1) cannot be used to estimate the lower bounds of \(\tau(A)\). Numerical results obtained from Theorem 3.1 of [4], Theorem 4.1 of [5], Theorem 4 of [6], and Theorem 3, i.e., inequalities (2), (4), (6), and (9), respectively, are given in Table 1 for the total number of iterations \(T=10\). In fact, \(\tau(A)=0.88732567\).
Table 1 The lower bounds of \(\pmb{\tau(A)}\)

Method                 t        \(\boldsymbol{\Upsilon_{t}}\)
Theorem 3.1 of [4]     -        0.71954029
Theorem 4 of [6]       -        0.72233354
Theorem 4.1 of [5]     -        0.72599653
Theorem 3              t = 1    0.73796896
                       t = 2    0.78701144
                       t = 3    0.81231875
                       t = 4    0.82309382
                       t = 5    0.82885000
                       t = 6    0.83191772
                       t = 7    0.83355094
                       t = 8    0.83442012
                       t = 9    0.83488269
                       t = 10   0.83512891

Example 2

Let
$$A = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 41&-12&-1&-5&-3&-3&-4&-4&-3&-3\\ -9&42&-15&-2&0&-4&0&-3&-4&-4\\ -1&-5&43&-13&-3&-3&-5&-4&-4&-4\\ -3&-5&-6&36&-9&-4&-3&-1&0&-4\\ -4&-3&-5&-2&34&-10&-2&-1&-4&-2\\ -3&-1&-4&-2&-1&37&-15&-5&-2&-3\\ -5&-2&-2&-2&-4&-2&35&-8&-5&-4\\ -5&-5&-1&-4&-5&-3&0&33&-6&-3\\ -5&-3&-4&-3&-3&-2&-2&-3&37&-11\\ -3&-5&-4&-2&-5&-5&-3&-3&-8&38.1 \end{array}\displaystyle \right ]. $$
Since \(A\in Z_{n}\) is strictly row diagonally dominant, it is easy to see that A is a nonsingular M-matrix. Numerical results obtained from Theorem 4.1 of [3], Corollary 3.4 of [4], Corollary 4.4 of [5], and Corollary 1, i.e., inequalities (1), (3), (5), and (10), respectively, are given in Table 2 for the total number of iterations \(T=10\). In fact, \(\tau(A)=1.09872077\).
Table 2 The lower bounds of \(\pmb{\tau(A)}\)

Method                 t        \(\boldsymbol{\Gamma_{t}}\)
Theorem 4.1 of [3]     -        0.10000000
Corollary 3.4 of [4]   -        0.12651607
Corollary 4.4 of [5]   -        0.15589448
Corollary 1            t = 1    0.62192050
                       t = 2    0.80351392
                       t = 3    0.90177936
                       t = 4    0.95648966
                       t = 5    0.98380481
                       t = 6    0.99943436
                       t = 7    1.00847717
                       t = 8    1.01247467
                       t = 9    1.01419855
                       t = 10   1.01473510

Remark 2

Numerical results in Table 1 and Table 2 show that:
(a) The lower bounds obtained from Theorem 3 and Corollary 1 are larger than the corresponding bounds in [3–6].

(b) The sequences obtained from Theorem 3 and Corollary 1 are monotone increasing.

(c) The sequences obtained from Theorem 3 and Corollary 1 effectively approximate the true value of \(\tau(A)\).

Example 3

Let \(A=[a_{ij}]\in\mathbb{R}^{10\times10}\), where \(a_{ii}=10\), \(i\in N\), and \(a_{ij}=-1\), \(i,j\in N\), \(i\neq j\). It is easy to verify that A is a nonsingular M-matrix and that \(A^{-1}\) is doubly stochastic. Applying Theorem 3 with \(T=10\), we obtain \(\tau(A)\geq1\) already at \(t=1\). In fact, \(\tau(A)=1\).
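This claim is easy to confirm numerically (our sketch, assuming NumPy): writing \(A=11I-J\), where J is the all-ones matrix, the spectrum of A is \(\{1, 11\}\), so \(\tau(A)=1\):

```python
# Sketch: verify Example 3, A = 11*I - J (a_ii = 10, a_ij = -1, n = 10).
import numpy as np

n = 10
A = 11.0 * np.eye(n) - np.ones((n, n))
tau = np.min(np.abs(np.linalg.eigvals(A)))
assert abs(tau - 1.0) < 1e-9                      # tau(A) = 1

# All row and column sums of A equal 1, so A e = e and A^T e = e,
# hence A^{-1} e = e and (A^{-1})^T e = e: A^{-1} is doubly stochastic.
assert np.allclose(A.sum(axis=0), 1) and np.allclose(A.sum(axis=1), 1)
assert (np.linalg.inv(A) >= -1e-12).all()         # A is a nonsingular M-matrix
```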

Remark 3

Numerical results in Example 3 show that the lower bounds obtained from Theorem 3 could reach the true value of \(\tau(A)\) in some cases.

4 Further work

In this paper, we present several convergent sequences of lower bounds approximating \(\tau(A)\). An interesting remaining problem is how accurately these bounds can be computed; at present, a rigorous error analysis appears difficult, and we will continue to study this problem in the future.

Declarations

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 11361074, 11501141), Foundation of Guizhou Science and Technology Department (Grant No. [2015]2073), Scientific Research Foundation for the introduction of talents of Guizhou Minzu University (No. 15XRY003), and Scientific Research Foundation of Guizhou Minzu University (No. 15XJS009).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Science, Guizhou Minzu University

References

  1. Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1994)
  2. Horn, RA, Johnson, CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
  3. Shivakumar, PN, Williams, JJ, Ye, Q, Marinov, CA: On two-sided bounds related to weakly diagonally dominant M-matrices with application to digital circuit dynamics. SIAM J. Matrix Anal. Appl. 17, 298-312 (1996)
  4. Tian, GX, Huang, TZ: Inequalities for the minimum eigenvalue of M-matrices. Electron. J. Linear Algebra 20, 291-302 (2010)
  5. Li, CQ, Li, YT, Zhao, RJ: New inequalities for the minimum eigenvalue of M-matrices. Linear Multilinear Algebra 61(9), 1267-1279 (2013)
  6. Wang, F, Sun, DF: Some new inequalities for the minimum eigenvalue of M-matrices. J. Inequal. Appl. 2015, 195 (2015)
  7. Zhao, JX, Wang, F, Sang, CL: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2015, 92 (2015)
  8. Zhou, DM, Chen, GL, Wu, GX, Zhang, XY: On some new bounds for eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 438, 1415-1426 (2013)
  9. Zhou, DM, Chen, GL, Wu, GX, Zhang, XY: Some inequalities for the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013, 16 (2013)

Copyright

© Zhao and Sang 2016