# New inequalities for the Hadamard product of an M-matrix and its inverse

## Abstract

For the Hadamard product $$A\circ A^{-1}$$ of an M-matrix A and its inverse $$A^{-1}$$, some new inequalities for the minimum eigenvalue of $$A\circ A^{-1}$$ are derived. A numerical example is given to show that the new inequalities are sharper than some known results.

## 1 Introduction

The set of all $$n\times n$$ real matrices is denoted by $$\mathbb{R}^{n\times n}$$, and $$\mathbb{C}^{n\times n}$$ denotes the set of all $$n\times n$$ complex matrices.

A matrix $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ is called an M-matrix [1] if there exists a nonnegative matrix B and a nonnegative real number λ such that

\begin{aligned} A=\lambda I-B, \quad\lambda\geq\rho(B), \end{aligned}

where I is the identity matrix and $$\rho(B)$$ is the spectral radius of B. If $$\lambda=\rho(B)$$, then A is a singular M-matrix; if $$\lambda>\rho(B)$$, then A is called a nonsingular M-matrix. Denote by $$M_{n}$$ the set of all $$n\times n$$ nonsingular M-matrices. Let us denote

\begin{aligned} \tau(A)=\min\bigl\{ \operatorname{Re}(\lambda):\lambda\in\sigma(A)\bigr\} , \end{aligned}

where $$\sigma(A)$$ denotes the spectrum of A. It is known [2] that $$\tau(A)=\frac{1}{\rho(A^{-1})}$$ is a positive real eigenvalue of $$A\in M_{n}$$.

The Hadamard product of two matrices $$A=(a_{ij})$$ and $$B=(b_{ij})$$ is the matrix $$A\circ B=(a_{ij}b_{ij})$$. If A and B are M-matrices, then it is proved in [3] that $$A\circ B^{-1}$$ is also an M-matrix.
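As an illustrative numerical sketch (not part of the original argument), this fact is easy to check with numpy; the two matrices below are hypothetical strictly row diagonally dominant M-matrices chosen only for illustration.

```python
import numpy as np

# Illustrative 3x3 strictly row diagonally dominant M-matrices
# (hypothetical examples, not taken from the paper).
A = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  3.0, -1.0],
              [-1.0, -1.0,  3.0]])
B = np.array([[ 4.0, -2.0, -1.0],
              [ 0.0,  3.0, -1.0],
              [-1.0, -1.0,  4.0]])

Binv = np.linalg.inv(B)   # the inverse of an M-matrix is entrywise nonnegative
H = A * Binv              # Hadamard (entrywise) product A o B^{-1}

# tau(H) = minimum real part over the spectrum of H; tau > 0 is consistent
# with A o B^{-1} being a nonsingular M-matrix, as proved in [3].
tau = min(np.linalg.eigvals(H).real)
```

Since A has nonpositive off-diagonal entries and $$B^{-1}\geq0$$, the Hadamard product is a Z-matrix, and the positive value of τ confirms it is a nonsingular M-matrix.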

A matrix A is irreducible if there does not exist any permutation matrix P such that

\begin{aligned} PAP^{T}= \begin{bmatrix} A_{11}&A_{12}\\ 0 &A_{22} \end{bmatrix}, \end{aligned}

where $$A_{11}$$ and $$A_{22}$$ are square matrices.

For convenience, for any positive integer n, N denotes the set $$\{1,2,\ldots,n\}$$. Let $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ be strictly diagonally dominant by row. For any $$i\in N$$, denote

\begin{aligned}& R_{i}=\sum _{k\neq i} |a_{ik}|,\qquad C_{i}=\sum _{k\neq i} |a_{ki}|,\qquad d_{i}=\frac{R_{i}}{|a_{ii}|},\qquad c_{i}=\frac{C_{i}}{|a_{ii}|},\quad i\in N; \\& s_{ji}=\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|d_{k}}{|a_{jj}|},\quad j\neq i,j\in N; \qquad s_{i}=\max _{j\neq i} \{s_{ij}\},\quad i\in N; \\& m_{ji}=\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{|a_{jj}|},\quad j\neq i,j\in N;\qquad m_{i}=\max _{j\neq i} \{m_{ij}\},\quad i\in N. \end{aligned}
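These quantities translate directly into code. The following numpy sketch (an illustrative transcription, assuming numpy) computes $$d_{i}$$, $$s_{ji}$$, and $$m_{ji}$$ for the $$4\times4$$ matrix used in the example of Section 4; the computed values match the entries reported there, e.g. $$s_{21}=0.7$$ and $$m_{21}=0.645$$.

```python
import numpy as np

# The example matrix from Section 4.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
aA = np.abs(A)

# R_i = off-diagonal absolute row sum, d_i = R_i / |a_ii|.
R = aA.sum(axis=1) - np.diag(aA)
d = R / np.diag(aA)

# s[j, i] = s_{ji} and m[j, i] = m_{ji} as defined above (zero on the diagonal).
s = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if j != i:
            s[j, i] = (aA[j, i] + sum(aA[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / aA[j, j]
m = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if j != i:
            m[j, i] = (aA[j, i] + sum(aA[j, k] * s[k, i]
                       for k in range(n) if k not in (j, i))) / aA[j, j]
```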

Recently, some lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse have been proposed. For $$A\in M_{n}$$, it was proved in [4] that

\begin{aligned} 0< \tau\bigl(A\circ A^{-1}\bigr)\leq1. \end{aligned}

Subsequently, Fiedler and Markham [3] gave a lower bound on $$\tau(A\circ A^{-1})$$,

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\frac{1}{n}, \end{aligned}

and conjectured that

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\frac{2}{n}. \end{aligned}

Chen [5], Song [6], and Yong [7] independently proved this conjecture.

In [8], Li et al. gave the following result:

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\min _{i} \biggl\{ \frac {a_{ii}-s_{i}R_{i}}{1+\sum _{j\neq i}s_{ji}} \biggr\} . \end{aligned}

Furthermore, if $$a_{11}=a_{22}=\cdots=a_{nn}$$, they have obtained

\begin{aligned} \min_{i} \biggl\{ \frac{a_{ii}-s_{i}R_{i}}{1+\sum _{j\neq i}s_{ji}} \biggr\} \geq\frac{2}{n}. \end{aligned}
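As a numerical sanity check (not part of the original derivation, numpy assumed), the bound of Li et al. can be evaluated on the example matrix of Section 4 and compared with the true minimum eigenvalue of $$A\circ A^{-1}$$; the paper reports the value 0.6624 for this bound.

```python
import numpy as np

# The example matrix from Section 4.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
aA = np.abs(A)
R = aA.sum(axis=1) - np.diag(aA)
d = R / np.diag(aA)
s = np.zeros((n, n))          # s[j, i] = s_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            s[j, i] = (aA[j, i] + sum(aA[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / aA[j, j]

# s_i = max_{j != i} s_{ij}; the diagonal of s is zero, so a row max suffices.
s_i = s.max(axis=1)

# Bound of Li et al.: min_i (a_ii - s_i R_i) / (1 + sum_{j != i} s_{ji}).
bound = min((A[i, i] - s_i[i] * R[i]) / (1.0 + s[:, i].sum())
            for i in range(n))

# True minimum eigenvalue of A o A^{-1} for comparison.
tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)
```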

In this paper, we present some new lower bounds for $$\tau(A\circ A^{-1})$$. These bounds improve the results in [8–11].

## 2 Preliminaries and notations

In this section, we give some lemmas that involve inequalities for the entries of $$A^{-1}$$. They will be useful in the following proofs.

### Lemma 2.1

[7]

If $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ is a strictly row diagonally dominant matrix, that is,

\begin{aligned} |a_{ii}|>\sum _{j\neq i}|a_{ij}|,\quad i\in N, \end{aligned}

then $$A^{-1}=(b_{ij})$$ exists, and

\begin{aligned} |b_{ji}|\leq\frac{\sum _{k\neq j}|a_{jk}|}{|a_{jj}|}|b_{ii}|,\quad j\neq i. \end{aligned}

### Lemma 2.2

Let $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ be a strictly diagonally dominant M-matrix by row. Then, for $$A^{-1}=(b_{ij})$$, we have

\begin{aligned} b_{ji}\leq\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}}b_{ii}\leq m_{j}b_{ii},\quad j\neq i, i\in N. \end{aligned}

### Proof

For $$i\in N$$, let

\begin{aligned} d_{k}(\varepsilon)=\frac{\sum _{l\neq k}|a_{kl}|+\varepsilon}{a_{kk}}, \end{aligned}

and

\begin{aligned} s_{ji}(\varepsilon)=\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|d_{k}(\varepsilon)}{|a_{jj}|},\quad j\neq i. \end{aligned}

Since A is strictly diagonally dominant, we have $$0< d_{k}<1$$ and $$0< s_{ji}<1$$. Hence there exists a sufficiently small $$\varepsilon>0$$ such that $$0< d_{k}(\varepsilon)<1$$ and $$0< s_{ji}(\varepsilon)<1$$. For any $$i\in N$$, let

\begin{aligned} S_{i}(\varepsilon)=\operatorname{diag} \bigl(s_{1i}(\varepsilon),\ldots ,s_{i-1,i}(\varepsilon), 1,s_{i+1,i}(\varepsilon), \ldots,s_{ni}(\varepsilon) \bigr). \end{aligned}

Obviously, the matrix $$AS_{i}(\varepsilon)$$ is also a strictly diagonally dominant M-matrix by row. Therefore, by Lemma 2.1, we derive the following inequality:

\begin{aligned} \frac{b_{ji}}{s_{ji}(\varepsilon)}\leq \frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}(\varepsilon)}{s_{ji}(\varepsilon)a_{jj}}b_{ii},\quad j\neq i, j\in N, \end{aligned}

i.e.,

\begin{aligned} b_{ji}\leq\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}(\varepsilon)}{a_{jj}}b_{ii},\quad j\neq i, j\in N. \end{aligned}

Letting $$\varepsilon\to0$$, we obtain

\begin{aligned} b_{ji}\leq\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}}b_{ii}\leq m_{j}b_{ii},\quad j\neq i, i\in N. \end{aligned}

The proof is completed. □

### Lemma 2.3

Let $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ be a strictly row diagonally dominant M-matrix. Then, for $$A^{-1}=(b_{ij})$$, we have

\begin{aligned} \frac{1}{a_{ii}-\sum _{j\neq i}|a_{ij}|m_{ji}} \geq b_{ii}\geq\frac{1}{a_{ii}},\quad i\in N. \end{aligned}

### Proof

Let $$B=A^{-1}$$. Since A is an M-matrix, then $$B\geq0$$. By $$AB=I$$, we have

\begin{aligned} 1=\sum _{j=1}^{n}a_{ij}b_{ji}=a_{ii}b_{ii}- \sum _{j\neq i}|a_{ij}|b_{ji},\quad i\in N. \end{aligned}

Hence

\begin{aligned} a_{ii}b_{ii}\geq1,\quad i\in N, \end{aligned}

that is,

\begin{aligned} b_{ii}\geq\frac{1}{a_{ii}},\quad i\in N. \end{aligned}

By Lemma 2.2, we have

\begin{aligned} 1 =&a_{ii}b_{ii}-\sum _{j\neq i}|a_{ij}|b_{ji}\\ \geq& a_{ii}b_{ii}-\sum _{j\neq i}|a_{ij}| \frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}}b_{ii}\\ =& \biggl(a_{ii}-\sum _{j\neq i}|a_{ij}|m_{ji} \biggr)b_{ii}, \end{aligned}

i.e.,

\begin{aligned} \frac{1}{a_{ii}-\sum _{j\neq i}|a_{ij}|m_{ji}}\geq b_{ii},\quad i\in N. \end{aligned}

Thus the proof is completed. □
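The two-sided bound of Lemma 2.3 can be checked numerically on the example matrix of Section 4 (an illustrative sketch, numpy assumed); the upper bounds obtained this way agree with the values 0.4397, 0.3832, 0.4419, 0.4415 reported there.

```python
import numpy as np

# The example matrix from Section 4.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
aA = np.abs(A)
d = (aA.sum(axis=1) - np.diag(aA)) / np.diag(aA)
s = np.zeros((n, n))          # s[j, i] = s_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            s[j, i] = (aA[j, i] + sum(aA[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / aA[j, j]
m = np.zeros((n, n))          # m[j, i] = m_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            m[j, i] = (aA[j, i] + sum(aA[j, k] * s[k, i]
                       for k in range(n) if k not in (j, i))) / aA[j, j]

b = np.diag(np.linalg.inv(A))             # true diagonal entries b_ii
lower = 1.0 / np.diag(A)                  # b_ii >= 1 / a_ii
# Upper bound of Lemma 2.3: 1 / (a_ii - sum_{j != i} |a_ij| m_{ji}).
upper = np.array([1.0 / (A[i, i] - sum(aA[i, j] * m[j, i]
                  for j in range(n) if j != i)) for i in range(n)])
```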

### Lemma 2.4

[12]

If $$A^{-1}$$ is a doubly stochastic matrix, then $$Ae=e$$, $$A^{T}e=e$$, where $$e=(1,1,\ldots,1)^{T}$$.

### Lemma 2.5

[13]

Let $$A=(a_{ij})\in\mathbb{C}^{n\times n}$$ and $$x_{1},x_{2},\ldots,x_{n}$$ be positive real numbers. Then all the eigenvalues of A lie in the region

\begin{aligned} \mathop{\bigcup_{i,j=1}}_{i\neq j}^{n} \biggl\{ z\in\mathbb{C}:|z-a_{ii}||z-a_{jj}| \leq \biggl({x_{i}}\sum _{k\neq i}\frac{1}{x_{k}}|a_{ki}| \biggr) \biggl({x_{j}}\sum _{k\neq j}\frac{1}{x_{k}}|a_{kj}| \biggr) \biggr\} . \end{aligned}

### Lemma 2.6

[3]

If P is an irreducible M-matrix, and $$Pz\geq kz$$ for a nonnegative nonzero vector z, then $$\tau(P)\geq k$$.

## 3 Main results

In this section, we give two new lower bounds for $$\tau(A\circ A^{-1})$$ which improve some previous results.

### Theorem 3.1

Let $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ be an M-matrix, and suppose that $$A^{-1}=(b_{ij})$$ is doubly stochastic. Then

\begin{aligned} b_{ii}\geq\frac{1}{1+\sum _{j\neq i}m_{ji}},\quad i\in N. \end{aligned}

### Proof

Since $$A^{-1}$$ is doubly stochastic and A is an M-matrix, by Lemma 2.4, we have

\begin{aligned} a_{ii}=\sum _{k\neq i}|a_{ik}|+1=\sum _{k\neq i}|a_{ki}|+1,\quad i\in N, \end{aligned}

and

\begin{aligned} b_{ii}+\sum _{j\neq i}b_{ji}=1,\quad i\in N. \end{aligned}

Hence the matrix A is strictly diagonally dominant by row. By Lemma 2.2, for $$i\in N$$, we have

\begin{aligned} 1 =&b_{ii}+\sum _{j\neq i}b_{ji}\leq b_{ii}+\sum _{j\neq i}\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}}b_{ii} \\ =& \biggl(1+\sum _{j\neq i}\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}} \biggr)b_{ii} \\ =& \biggl(1+\sum _{j\neq i}m_{ji} \biggr)b_{ii}, \end{aligned}

i.e.,

\begin{aligned} b_{ii}\geq\frac{1}{1+\sum _{j\neq i}m_{ji}},\quad i\in N. \end{aligned}

The proof is completed. □
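As a quick numerical check of Theorem 3.1 (an illustrative sketch, numpy assumed), the bound $$b_{ii}\geq1/(1+\sum_{j\neq i}m_{ji})$$ can be evaluated on the example matrix of Section 4; the resulting lower bounds agree with the values 0.3668, 0.3556, 0.3668, 0.3656 reported there.

```python
import numpy as np

# The example matrix from Section 4.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
aA = np.abs(A)
d = (aA.sum(axis=1) - np.diag(aA)) / np.diag(aA)
s = np.zeros((n, n))          # s[j, i] = s_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            s[j, i] = (aA[j, i] + sum(aA[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / aA[j, j]
m = np.zeros((n, n))          # m[j, i] = m_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            m[j, i] = (aA[j, i] + sum(aA[j, k] * s[k, i]
                       for k in range(n) if k not in (j, i))) / aA[j, j]

b = np.diag(np.linalg.inv(A))        # true diagonal entries b_ii
# Theorem 3.1: b_ii >= 1 / (1 + sum_{j != i} m_{ji}).
t31 = 1.0 / (1.0 + m.sum(axis=0))    # column sums give sum_{j != i} m_{ji}
```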

### Theorem 3.2

Let $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ be an M-matrix, and let $$A^{-1}=(b_{ij})$$ be doubly stochastic. Then

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr) \geq&\min _{i\neq j}\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2} \\ &{}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} . \end{aligned}
(3.1)

### Proof

It is evident that (3.1) is an equality for $$n=1$$.

We next assume that $$n\geq2$$.

First, we assume that A is irreducible. By Lemma 2.4, we have

\begin{aligned} a_{ii}=\sum _{j\neq i}|a_{ij}|+1=\sum _{j\neq i}|a_{ji}|+1,\quad i\in N, \end{aligned}

and

\begin{aligned} a_{ii}>1,\quad i\in N. \end{aligned}

Let

\begin{aligned} m_{j}=\max _{i\neq j} \{m_{ji} \}=\max _{i\neq j} \biggl\{ \frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}} \biggr\} ,\quad j\in N. \end{aligned}

Since A is irreducible, we have $$0< m_{j}\leq1$$. Let $$\tau(A\circ A^{-1})=\lambda$$; then $$0<\lambda<a_{ii}b_{ii}$$ for all $$i\in N$$. Thus, by Lemma 2.5, there is a pair $$(i,j)$$ of positive integers with $$i\neq j$$ such that

\begin{aligned} |\lambda-a_{ii}b_{ii}||\lambda-a_{jj}b_{jj}| \leq& \biggl(m_{i}\sum _{k\neq i}\frac{1}{m_{k}}|a_{ki}b_{ki}| \biggr) \biggl(m_{j}\sum _{k\neq j}\frac{1}{m_{k}}|a_{kj}b_{kj}| \biggr) \\ \leq& \biggl(m_{i}\sum _{k\neq i}\frac{1}{m_{k}}|a_{ki}|m_{k}b_{ii} \biggr) \biggl(m_{j} \sum _{k\neq j}\frac{1}{m_{k}}|a_{kj}|m_{k}b_{jj} \biggr) \\ =& \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr). \end{aligned}
(3.2)

From inequality (3.2), we have

\begin{aligned} (\lambda-a_{ii}b_{ii}) (\lambda-a_{jj}b_{jj}) \leq \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr). \end{aligned}
(3.3)

Since $$\lambda< a_{ii}b_{ii}$$ and $$\lambda< a_{jj}b_{jj}$$, solving (3.3) for λ gives

\begin{aligned} \lambda \geq&\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2}\\ &{}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} , \end{aligned}

that is,

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr) \geq&\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2}\\ &{}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ \geq&\min _{i\neq j}\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2}\\ &{}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} . \end{aligned}

If A is reducible, without loss of generality, we may assume that A has the following block upper triangular form:

\begin{aligned} A= \begin{bmatrix} A_{11}&A_{12}&\cdots& A_{1s}\\ &A_{22}&\cdots& A_{2s}\\ & &\ddots& \vdots\\ & & & A_{ss} \end{bmatrix} \end{aligned}

with irreducible diagonal blocks $$A_{ii}$$, $$i=1,2,\ldots,s$$. Obviously, $$\tau(A\circ A^{-1})=\min _{i}\tau(A_{ii}\circ A_{ii}^{-1})$$. Thus, the problem of the reducible matrix A is reduced to those of the irreducible diagonal blocks $$A_{ii}$$. The result of Theorem 3.2 also holds. □
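The bound (3.1) is straightforward to evaluate numerically. The following sketch (illustrative, numpy assumed) computes it for the example matrix of Section 4, using the true diagonal entries $$b_{ii}$$ of $$A^{-1}$$; the paper reports the value 0.8456 for this matrix.

```python
import numpy as np

# The example matrix from Section 4.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
aA = np.abs(A)
d = (aA.sum(axis=1) - np.diag(aA)) / np.diag(aA)
s = np.zeros((n, n))          # s[j, i] = s_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            s[j, i] = (aA[j, i] + sum(aA[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / aA[j, j]
m = np.zeros((n, n))          # m[j, i] = m_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            m[j, i] = (aA[j, i] + sum(aA[j, k] * s[k, i]
                       for k in range(n) if k not in (j, i))) / aA[j, j]

Ainv = np.linalg.inv(A)
b = np.diag(Ainv)
x = np.diag(A) * b                    # a_ii * b_ii
C = aA.sum(axis=0) - np.diag(aA)      # C_i = sum_{k != i} |a_ki|
g = m.max(axis=1) * C * b             # m_i * sum_{k != i} |a_ki| * b_ii

# Bound (3.1) of Theorem 3.2: min over pairs i != j.
bound = min(0.5 * (x[i] + x[j]
            - np.sqrt((x[i] - x[j]) ** 2 + 4.0 * g[i] * g[j]))
            for i in range(n) for j in range(n) if i != j)

tau = min(np.linalg.eigvals(A * Ainv).real)   # true tau(A o A^{-1})
```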

### Theorem 3.3

Let $$A=(a_{ij})\in M_{n}$$ and $$A^{-1}=(b_{ij})$$ be a doubly stochastic matrix. Then

\begin{aligned} &\min _{i\neq j}\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2}\\ &\qquad{}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad\geq\min _{i} \biggl\{ \frac{a_{ii}-s_{i}R_{i}}{1+\sum _{j\neq i}s_{ji}} \biggr\} . \end{aligned}

### Proof

Since $$A^{-1}$$ is a doubly stochastic matrix, by Lemma 2.4, we have

\begin{aligned} a_{ii}=\sum _{k\neq i}|a_{ik}|+1=\sum _{k\neq i}|a_{ki}|+1,\quad i\in N. \end{aligned}

For any $$j\neq i$$, we have

\begin{aligned} d_{j}-s_{ji} =&\frac{R_{j}}{a_{jj}}-\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|d_{k}}{a_{jj}} \\ =&\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|}{a_{jj}}-\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|d_{k}}{a_{jj}} \\ =&\frac{\sum _{k\neq j,i}|a_{jk}|(1-d_{k})}{a_{jj}}\geq0, \end{aligned}

or equivalently

\begin{aligned} d_{j}\geq s_{ji},\quad j\neq i, j\in N. \end{aligned}
(3.4)

So, we can obtain

\begin{aligned} m_{ji}=\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}}\leq\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|d_{k}}{a_{jj}}=s_{ji},\quad j \neq i, j\in N, \end{aligned}
(3.5)

and

\begin{aligned} m_{i}\leq s_{i},\quad i\in N. \end{aligned}

Without loss of generality, for $$i\neq j$$, assume that

\begin{aligned} a_{ii}b_{ii}-m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \leq a_{jj}b_{jj}-m_{j}\sum _{k\neq j}|a_{kj}|b_{jj}. \end{aligned}
(3.6)

Thus, (3.6) is equivalent to

\begin{aligned} m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \leq a_{jj}b_{jj}-a_{ii}b_{ii}+m_{i} \sum _{k\neq i}|a_{ki}|b_{ii}. \end{aligned}
(3.7)

From (3.1) and (3.7), we have

\begin{aligned} & \frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[(a_{ii}b_{ii}-a_{jj}b_{jj})^{2} +4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j} |a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad \geq\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[(a_{ii}b_{ii}-a_{jj}b_{jj})^{2} \\ &\qquad{} +4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(a_{jj}b_{jj}-a_{ii}b_{ii}+m_{i} \sum _{k\neq i} |a_{ki}|b_{ii} \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad = \frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[(a_{ii}b_{ii}-a_{jj}b_{jj})^{2} \\ &\qquad{} +4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr)^{2}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) (a_{jj}b_{jj} -a_{ii}b_{ii} ) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad=\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ \biggl(a_{jj}b_{jj}- a_{ii}b_{ii}+2m_{i} \sum _{k\neq i}|a_{ki}|b_{ii} \biggr)^{2} \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad=\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl(a_{jj}b_{jj}- a_{ii}b_{ii}+2m_{i} \sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggr\} \\ &\quad= a_{ii}b_{ii}-m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \\ &\quad= b_{ii} \biggl(a_{ii}-m_{i}\sum _{k\neq i}|a_{ki}| \biggr)\\ &\quad\geq\frac{a_{ii}-m_{i}R_{i}}{1+\sum _{j\neq i}m_{ji}}\\ &\quad\geq\frac{a_{ii}-s_{i}R_{i}}{1+\sum _{j\neq i}s_{ji}}. \end{aligned}

Thus we have

\begin{aligned} &\min _{i\neq j}\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad\geq\min _{i} \biggl\{ \frac{a_{ii}-s_{i}R_{i}}{1+\sum _{j\neq i}s_{ji}} \biggr\} . \end{aligned}

The proof is completed. □

### Remark 3.1

According to inequality (3.4), it is easy to see that

\begin{aligned} b_{ji}\leq\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|s_{ki}}{a_{jj}}b_{ii}\leq\frac{|a_{ji}|+\sum _{k\neq j,i}|a_{jk}|d_{k}}{a_{jj}}b_{ii},\quad j\neq i, j\in N. \end{aligned}

That is to say, the result of Lemma 2.2 is sharper than that of Theorem 2.1 in [8]. Moreover, the result of Theorem 3.2 is sharper than that of Theorem 3.1 in [8].

### Theorem 3.4

Let $$A=(a_{ij})\in\mathbb{R}^{n\times n}$$ be an irreducible strictly row diagonally dominant M-matrix. Then

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\min _{i} \biggl\{ 1-\frac{1}{a_{ii}} \sum _{j\neq i}|a_{ji}|m_{ji} \biggr\} . \end{aligned}

### Proof

Since A is irreducible, $$A^{-1}>0$$, and $$A\circ A^{-1}$$ is again irreducible. Note that

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)=\tau\bigl(\bigl(A\circ A^{-1} \bigr)^{T}\bigr)=\tau\bigl(A^{T}\circ \bigl(A^{T} \bigr)^{-1}\bigr). \end{aligned}

Let

\begin{aligned} \bigl(A^{T}\circ\bigl(A^{T}\bigr)^{-1} \bigr)e=(t_{1},t_{2},\ldots,t_{n})^{T}, \end{aligned}

where $$e=(1,1,\ldots,1)^{T}$$. Without loss of generality, we may assume that $$t_{1}=\min _{i} \{t_{i} \}$$. By Lemma 2.2, we have

\begin{aligned} t_{1} =&\sum _{j=1}^{n}|a_{j1}b_{j1}|=a_{11}b_{11}- \sum _{j\neq 1}|a_{j1}|b_{j1} \\ \geq&a_{11}b_{11}-\sum _{j\neq 1}|a_{j1}| \frac{|a_{j1}|+\sum _{k\neq j,1}|a_{jk}|s_{k1}}{a_{jj}}b_{11} \\ =&a_{11}b_{11}-\sum _{j\neq1}|a_{j1}|m_{j1}b_{11} \\ =&\biggl(a_{11}-\sum _{j\neq1}|a_{j1}|m_{j1} \biggr)b_{11} \\ \geq&\frac{a_{11}-\sum _{j\neq1}|a_{j1}|m_{j1}}{a_{11}} \\ =&1-\frac{1}{a_{11}}\sum _{j\neq1}|a_{j1}|m_{j1}. \end{aligned}

Therefore, by Lemma 2.6, we have

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\min _{i} \biggl\{ 1-\frac{1}{a_{ii}} \sum _{j\neq i}|a_{ji}|m_{ji} \biggr\} . \end{aligned}

The proof is completed. □
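The bound of Theorem 3.4 can also be checked numerically (an illustrative sketch, numpy assumed) on the example matrix of Section 4, together with the corresponding bound of Theorem 3.5 in [8] obtained by replacing $$m_{ji}$$ with $$s_{ji}$$; since $$m_{ji}\leq s_{ji}$$, the m-based bound is never smaller.

```python
import numpy as np

# The example matrix from Section 4.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
aA = np.abs(A)
d = (aA.sum(axis=1) - np.diag(aA)) / np.diag(aA)
s = np.zeros((n, n))          # s[j, i] = s_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            s[j, i] = (aA[j, i] + sum(aA[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / aA[j, j]
m = np.zeros((n, n))          # m[j, i] = m_{ji}
for i in range(n):
    for j in range(n):
        if j != i:
            m[j, i] = (aA[j, i] + sum(aA[j, k] * s[k, i]
                       for k in range(n) if k not in (j, i))) / aA[j, j]

# Theorem 3.4: min_i { 1 - (1/a_ii) sum_{j != i} |a_ji| m_{ji} }.
bound_m = min(1.0 - sum(aA[j, i] * m[j, i] for j in range(n) if j != i)
              / A[i, i] for i in range(n))
# Analogous bound with s_{ji} (Theorem 3.5 of [8]), for comparison.
bound_s = min(1.0 - sum(aA[j, i] * s[j, i] for j in range(n) if j != i)
              / A[i, i] for i in range(n))

tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)
```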

### Remark 3.2

According to inequality (3.5), we can get

\begin{aligned} 1-\frac{1}{a_{ii}}\sum _{j\neq i}|a_{ji}|m_{ji} \geq1-\frac{1}{a_{ii}}\sum _{j\neq i}|a_{ji}|s_{ji}. \end{aligned}

That is to say, the bound of Theorem 3.4 is sharper than the bound of Theorem 3.5 in [8].

### Remark 3.3

If A is an M-matrix, we know that there exists a diagonal matrix D with positive diagonal entries such that $$D^{-1}AD$$ is a strictly row diagonally dominant M-matrix. So the result of Theorem 3.4 also holds for a general M-matrix.

## 4 Example

Consider the following M-matrix:

$$A= \begin{bmatrix} 4&-1&-1&-1\\ -2&5&-1&-1\\ 0&-2&4&-1\\ -1&-1&-1&4 \end{bmatrix}.$$

Since $$Ae=e$$ and $$A^{T}e=e$$, $$A^{-1}$$ is doubly stochastic. By calculations we have

$$A^{-1}= \begin{bmatrix} 0.4000&0.2000&0.2000&0.2000\\ 0.2333&0.3667&0.2000&0.2000\\ 0.1667&0.2333&0.4000&0.2000\\ 0.2000&0.2000&0.2000&0.4000 \end{bmatrix}.$$
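These facts are easy to verify numerically (an illustrative check, numpy assumed): all row and column sums of A equal 1, the computed inverse is entrywise nonnegative with unit row and column sums, and it matches the entries displayed above.

```python
import numpy as np

# The example matrix A and its inverse.
A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
Ainv = np.linalg.inv(A)

# Ae = e and A^T e = e, so by Lemma 2.4 A^{-1} is doubly stochastic:
# its rows and columns all sum to 1.
row_sums = Ainv.sum(axis=1)
col_sums = Ainv.sum(axis=0)
```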

(1) Estimate the upper bounds for the entries of $$A^{-1}=(b_{ij})$$. If we apply Theorem 2.1(a) of [8], we have

$$A^{-1}\leq \begin{bmatrix} 1&0.6250&0.6375&0.6375\\ 0.7000&1&0.6500&0.6500\\ 0.5875&0.6875&1&0.6500\\ 0.6375&0.6250&0.6375&1 \end{bmatrix} \circ \begin{bmatrix} b_{11}&b_{22}&b_{33}&b_{44}\\ b_{11}&b_{22}&b_{33}&b_{44}\\ b_{11}&b_{22}&b_{33}&b_{44}\\ b_{11}&b_{22}&b_{33}&b_{44} \end{bmatrix}.$$

If we apply Lemma 2.2, we have

$$A^{-1}\leq \begin{bmatrix} 1&0.5781&0.5718&0.5750\\ 0.6450&1&0.5825&0.5850\\ 0.5093&0.6562&1&0.5750\\ 0.5718&0.5781&0.5718&1 \end{bmatrix} \circ \begin{bmatrix} b_{11}&b_{22}&b_{33}&b_{44}\\ b_{11}&b_{22}&b_{33}&b_{44}\\ b_{11}&b_{22}&b_{33}&b_{44}\\ b_{11}&b_{22}&b_{33}&b_{44} \end{bmatrix}.$$

Comparing the result of Lemma 2.2 with that of Theorem 2.1(a) of [8], we see that the bounds given by Lemma 2.2 are sharper.

By Theorem 2.3 and Lemma 3.2 of [8], we can get the following bounds for the diagonal entries of $$A^{-1}$$:

\begin{aligned}[b] &0.3419\leq b_{11}\leq0.5882;\qquad 0.3404\leq b_{22}\leq0.5128;\\ &0.3419\leq b_{33}\leq0.6061;\qquad 0.3404\leq b_{44}\leq0.5882. \end{aligned}

By Lemma 2.3 and Theorem 3.1, we obtain

\begin{aligned}& 0.3668\leq b_{11}\leq0.4397;\qquad 0.3556\leq b_{22}\leq0.3832, \\& 0.3668\leq b_{33}\leq0.4419;\qquad 0.3656\leq b_{44}\leq0.4415. \end{aligned}

(2) Lower bounds for $$\tau(A\circ A^{-1})$$.

By the conjecture of Fiedler and Markham, we have

\begin{aligned} \tau \bigl(A\circ A^{-1} \bigr)\geq\frac{2}{n}= \frac{1}{2}=0.5. \end{aligned}

By Theorem 3.1 of [8], we have

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\min _{i} \biggl\{ \frac{a_{ii}-s_{i}R_{i}}{1+\sum _{j\neq i}s_{ji}} \biggr\} =0.6624. \end{aligned}

By Corollary 2.5 of [9], we have

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq1-\rho^{2}(J_{A})=0.4145. \end{aligned}

By Theorem 3.1 of [10], we have

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\min _{i} \biggl\{ \frac{a_{ii}-u_{i}R_{i}}{1+\sum _{j\neq i}u_{ji}} \biggr\} =0.8250. \end{aligned}

By Corollary 2 of [11], we have

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr)\geq\min _{i} \biggl\{ \frac{a_{ii}-w_{i}\sum _{j\neq i}\mid a_{ji}\mid}{1+\sum _{j\neq i}w_{ji}} \biggr\} =0.8321. \end{aligned}

If we apply Theorem 3.2, we have

\begin{aligned} \tau\bigl(A\circ A^{-1}\bigr) \geq&\min _{i\neq j}\frac{1}{2} \biggl\{ a_{ii}b_{ii}+a_{jj}b_{jj}- \biggl[ (a_{ii}b_{ii}-a_{jj}b_{jj} )^{2}\\ &{}+4 \biggl(m_{i}\sum _{k\neq i}|a_{ki}|b_{ii} \biggr) \biggl(m_{j}\sum _{k\neq j}|a_{kj}|b_{jj} \biggr) \biggr]^{\frac{1}{2}} \biggr\} =0.8456. \end{aligned}

The numerical example shows that the bound of Theorem 3.2 is better than the corresponding bounds in [8–11].

## References

1. Berman, A, Plemmons, RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia (1979)

2. Horn, RA, Johnson, CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)

3. Fiedler, M, Markham, TL: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 101, 1-8 (1988)

4. Fiedler, M, Johnson, CR, Markham, TL, Neumann, M: A trace inequality for M-matrix and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 71, 81-94 (1985)

5. Chen, SC: A lower bound for the minimum eigenvalue of the Hadamard product of matrices. Linear Algebra Appl. 378, 159-166 (2004)

6. Song, YZ: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 305, 99-105 (2000)

7. Yong, XR: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 320, 167-171 (2000)

8. Li, HB, Huang, TZ, Shen, SQ, Li, H: Lower bounds for the minimum eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 420, 235-247 (2007)

9. Zhou, DM, Chen, GL, Wu, GX, Zhang, XY: Some inequalities for the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013, 16 (2013)

10. Cheng, GH, Tan, Q, Wang, ZD: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013, 65 (2013)

11. Li, YT, Wang, F, Li, CQ, Zhao, JX: Some new bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013, 480 (2013)

12. Yong, XR, Wang, Z: On a conjecture of Fiedler and Markham. Linear Algebra Appl. 288, 259-267 (1999)

13. Horn, RA, Johnson, CR: Matrix Analysis. Cambridge University Press, Cambridge (1985)

## Acknowledgements

The author is grateful to the referees for their useful and constructive suggestions. This research is supported by the Scientific Research Fund of Yunnan Provincial Education Department (2013C165).

## Author information


### Corresponding author

Correspondence to Fu-bin Chen.

### Competing interests

The author declares that he has no competing interests.

## Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


Chen, Fb. New inequalities for the Hadamard product of an M-matrix and its inverse. J Inequal Appl 2015, 35 (2015). https://doi.org/10.1186/s13660-015-0555-1