# Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse

## Abstract

In this paper, some new inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse are given. These inequalities are sharper than several well-known results. A simple example is given to illustrate them.

AMS Subject Classification: 15A18, 15A42.

## 1 Introduction

A matrix $A=\left({a}_{ij}\right)\in {\mathbb{R}}^{n×n}$ is called a nonnegative matrix if ${a}_{ij}\ge 0$ for all i and j. A matrix $A\in {\mathbb{R}}^{n×n}$ is called a nonsingular M-matrix if there exist $B\ge 0$ and $s>0$ such that

$A=s{I}_{n}-B\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}s>\rho \left(B\right),$

where $\rho \left(B\right)$ is the spectral radius of the nonnegative matrix B and ${I}_{n}$ is the $n×n$ identity matrix. Denote by ${\mathcal{M}}_{n}$ the set of all $n×n$ nonsingular M-matrices. The matrices in ${\mathcal{M}}_{n}^{-1}:=\left\{{A}^{-1}:A\in {\mathcal{M}}_{n}\right\}$ are called inverse M-matrices. Let us denote

$\tau \left(A\right)=min\left\{Re\lambda :\lambda \in \sigma \left(A\right)\right\},$

where $\sigma \left(A\right)$ denotes the spectrum of A. It is known that

$\tau \left(A\right)=\frac{1}{\rho \left({A}^{-1}\right)}$

is a positive real eigenvalue of $A\in {\mathcal{M}}_{n}$ and the corresponding eigenvector is nonnegative. Indeed,

$\tau \left(A\right)=s-\rho \left(B\right),$

if $A=s{I}_{n}-B$, where $s>\rho \left(B\right)$, $B\ge 0$.

For any two $n×n$ matrices $A=\left({a}_{ij}\right)$ and $B=\left({b}_{ij}\right)$, the Hadamard product of A and B is $A\circ B=\left({a}_{ij}{b}_{ij}\right)$. If $A,B\in {\mathcal{M}}_{n}$, then $A\circ {B}^{-1}$ is also an M-matrix.
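These facts are easy to exercise numerically. The sketch below, with an assumed 3×3 example matrix (not one taken from this paper), forms the Hadamard product $A\circ {A}^{-1}$ with NumPy and confirms that ${A}^{-1}$ is nonnegative and that the minimum eigenvalue of the product lies in $(0,1]$, consistent with the classical bound $\tau \left(A\circ {A}^{-1}\right)\le 1$ recalled below.

```python
import numpy as np

# Assumed example: a strictly diagonally dominant M-matrix
# (positive diagonal, nonpositive off-diagonal entries).
A = np.array([[4.0, -1.0, -1.0],
              [-2.0, 5.0, -1.0],
              [0.0, -2.0, 4.0]])

Ainv = np.linalg.inv(A)   # the inverse of a nonsingular M-matrix is nonnegative
H = A * Ainv              # Hadamard (entrywise) product A o A^{-1}

# tau(H): the minimum of the real parts of the eigenvalues of H
tau = min(np.linalg.eigvals(H).real)
```

Since this example matrix is irreducible, its inverse is in fact entrywise positive.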

A matrix A is irreducible if there does not exist a permutation matrix P such that

$PA{P}^{T}=\left[\begin{array}{cc}{A}_{1,1}& {A}_{1,2}\\ 0& {A}_{2,2}\end{array}\right],$

where ${A}_{1,1}$ and ${A}_{2,2}$ are square matrices.

For convenience, the set $\left\{1,2,\dots ,n\right\}$ is denoted by N, where n (≥3) is any positive integer. Let $A=\left({a}_{ij}\right)\in {\mathbb{R}}^{n×n}$ be strictly diagonally dominant by row. For $i,j\in N$ with $j\ne i$, denote

${R}_{i}=\sum _{j\ne i}|{a}_{ji}|,\phantom{\rule{2em}{0ex}}{d}_{j}=\frac{{\sum }_{k\ne j}|{a}_{jk}|}{{a}_{jj}},\phantom{\rule{2em}{0ex}}{r}_{i}=\underset{j\ne i}{max}\left\{\frac{|{a}_{ji}|}{{a}_{jj}-{\sum }_{k\ne j,i}|{a}_{jk}|}\right\},$

${m}_{ji}=\frac{|{a}_{ji}|+{r}_{i}{\sum }_{k\ne j,i}|{a}_{jk}|}{{a}_{jj}},\phantom{\rule{2em}{0ex}}{s}_{ji}=\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{d}_{k}}{{a}_{jj}},\phantom{\rule{2em}{0ex}}{u}_{ji}=\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}},$

and ${m}_{j}=\underset{i\ne j}{max}\left\{{m}_{ji}\right\}$, ${s}_{j}=\underset{i\ne j}{max}\left\{{s}_{ji}\right\}$, ${u}_{j}=\underset{i\ne j}{max}\left\{{u}_{ji}\right\}$.

Recently, some lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix have been proposed. For example, for $A\in {\mathcal{M}}_{n}$, Fiedler et al. [4] proved that $\tau \left(A\circ {A}^{-1}\right)\le 1$. Subsequently, Fiedler and Markham [3] showed that $\tau \left(A\circ {A}^{-1}\right)\ge \frac{1}{n}$, and they conjectured that $\tau \left(A\circ {A}^{-1}\right)\ge \frac{2}{n}$. Song [5], Yong [6] and Chen [7] independently proved this conjecture. Li et al. [8] improved the bound $\tau \left(A\circ {A}^{-1}\right)\ge \frac{2}{n}$ when ${A}^{-1}$ is a doubly stochastic matrix and gave the following result:

$\tau \left(A\circ {A}^{-1}\right)\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{s}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{s}_{ji}}\right\}.$

Later, Li et al. [9] gave the following result:

$\tau \left(A\circ {A}^{-1}\right)\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{m}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{m}_{ji}}\right\}.$

Furthermore, if ${a}_{11}={a}_{22}=\cdots ={a}_{nn}$, they have obtained

$\underset{i}{min}\left\{\frac{{a}_{ii}-{m}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{m}_{ji}}\right\}\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{s}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{s}_{ji}}\right\},$

i.e., under this condition, the bound in [9] is better than the one in [8].
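To make these two bounds concrete, the following sketch (an illustration written for this survey, not code from the cited papers) evaluates both of them for the 4×4 matrix used later in Section 4; all row and column sums of that matrix equal 1, so its inverse is doubly stochastic and both results apply.

```python
import numpy as np

A = np.array([[4., -1., -1., -1.],
              [-2., 5., -1., -1.],
              [0., -2., 4., -1.],
              [-1., -1., -1., 4.]])
n = len(A)

def offsum(j, i):  # sum_{k != j,i} |a_jk|
    return sum(abs(A[j, k]) for k in range(n) if k != j and k != i)

d = [sum(abs(A[j, k]) for k in range(n) if k != j) / A[j, j] for j in range(n)]
r = [max(abs(A[j, i]) / (A[j, j] - offsum(j, i)) for j in range(n) if j != i)
     for i in range(n)]
m = [[(abs(A[j, i]) + r[i] * offsum(j, i)) / A[j, j] for i in range(n)]
     for j in range(n)]
s = [[(abs(A[j, i]) + sum(abs(A[j, k]) * d[k] for k in range(n) if k != j and k != i))
      / A[j, j] for i in range(n)] for j in range(n)]

def bound(t):  # min_i (a_ii - t_i R_i) / (1 + sum_{j != i} t_ji), t_i = max_{k != i} t_ik
    return min((A[i, i]
                - max(t[i][k] for k in range(n) if k != i)
                * sum(abs(A[j, i]) for j in range(n) if j != i))
               / (1 + sum(t[j][i] for j in range(n) if j != i))
               for i in range(n))

tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)
bound_s, bound_m = bound(s), bound(m)
```

For this matrix the m-based bound evaluates to 0.8000, which is the value reported in the example of Section 4.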

In this paper, our aim is to improve the lower bounds for the minimum eigenvalue $\tau \left(A\circ {A}^{-1}\right)$. The main ideas are based on those of [8] and [9].

## 2 Some preliminaries and notations

In this section, we give some notations and lemmas which mainly focus on some inequalities for the entries of the inverse M-matrix and the strictly diagonally dominant matrix.

Lemma 2.1 

Let $A\in {\mathbb{R}}^{n×n}$ be a strictly diagonally dominant matrix by row, i.e.,

$|{a}_{ii}|>\sum _{j\ne i}|{a}_{ij}|,\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

If ${A}^{-1}=\left({b}_{ij}\right)$, then

$|{b}_{ji}|\le \frac{{\sum }_{k\ne j}|{a}_{jk}|}{|{a}_{jj}|}|{b}_{ii}|,\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }j\in N.$
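Lemma 2.1 applies to any strictly row diagonally dominant matrix, not only to M-matrices. A quick numerical sanity check, with an assumed example matrix:

```python
import numpy as np

# Assumed strictly diagonally dominant (by row) matrix; note it is not an
# M-matrix, since some off-diagonal entries are positive.
A = np.array([[5.0, 2.0, -1.0],
              [1.0, -4.0, 2.0],
              [-1.0, 1.0, 3.0]])
B = np.linalg.inv(A)
n = len(A)

# |b_ji| <= (sum_{k != j} |a_jk| / |a_jj|) * |b_ii| for all j != i
lemma_holds = all(
    abs(B[j, i]) <= sum(abs(A[j, k]) for k in range(n) if k != j) / abs(A[j, j])
                    * abs(B[i, i]) + 1e-12
    for i in range(n) for j in range(n) if j != i)
```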

Lemma 2.2 Let $A\in {\mathbb{R}}^{n×n}$ be a strictly diagonally dominant M-matrix by row. If ${A}^{-1}=\left({b}_{ij}\right)$, then

${b}_{ji}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}{b}_{ii}\le {u}_{j}{b}_{ii},\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }i\in N.$

Proof Suppose that $A\in {\mathbb{R}}^{n×n}$ is a strictly diagonally dominant M-matrix by row. For $i\in N$, let

${r}_{i}\left(\epsilon \right)=\underset{j\ne i}{max}\left\{\frac{|{a}_{ji}|+\epsilon }{{a}_{jj}-{\sum }_{k\ne j,i}|{a}_{jk}|}\right\}$

and

${m}_{ji}\left(\epsilon \right)=\frac{{r}_{i}\left(\epsilon \right)\left({\sum }_{k\ne j,i}|{a}_{jk}|+\epsilon \right)+|{a}_{ji}|}{{a}_{jj}},\phantom{\rule{1em}{0ex}}j\ne i.$

Since A is strictly diagonally dominant, we have $0\le {r}_{i}<1$ and $0\le {m}_{ji}<1$. Therefore, there exists $\epsilon >0$ such that $0<{r}_{i}\left(\epsilon \right)<1$ and $0<{m}_{ji}\left(\epsilon \right)<1$. Let us define the positive diagonal matrix

${M}_{i}\left(\epsilon \right)=diag\left({m}_{1i}\left(\epsilon \right),\dots ,{m}_{i-1,i}\left(\epsilon \right),1,{m}_{i+1,i}\left(\epsilon \right),\dots ,{m}_{ni}\left(\epsilon \right)\right).$

Similarly to the proofs of Theorem 2.1 and Theorem 2.4 in [8], we can prove that the matrix $A{M}_{i}\left(\epsilon \right)$ is also a strictly diagonally dominant M-matrix by row for any $i\in N$. Furthermore, by Lemma 2.1, we can obtain the following result:

${m}_{ji}^{-1}\left(\epsilon \right){b}_{ji}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}\left(\epsilon \right)}{{m}_{ji}\left(\epsilon \right){a}_{jj}}{b}_{ii},\phantom{\rule{1em}{0ex}}j\ne i,j\in N,$

i.e.,

${b}_{ji}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}\left(\epsilon \right)}{{a}_{jj}}{b}_{ii},\phantom{\rule{1em}{0ex}}j\ne i,j\in N.$

Letting $\epsilon \to {0}^{+}$, we get

${b}_{ji}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}{b}_{ii}\le {u}_{j}{b}_{ii},\phantom{\rule{1em}{0ex}}j\ne i,j\in N.$

This proof is completed. □
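As a numerical sanity check of Lemma 2.2, the following sketch computes the factors ${u}_{ji}$ from the definitions in Section 1 for an assumed 3×3 strictly diagonally dominant M-matrix and verifies the entrywise inequality ${b}_{ji}\le {u}_{ji}{b}_{ii}$:

```python
import numpy as np

# Assumed strictly diagonally dominant M-matrix (example, not from the paper).
A = np.array([[4.0, -1.0, -1.0],
              [-2.0, 5.0, -1.0],
              [0.0, -2.0, 4.0]])
B = np.linalg.inv(A)
n = len(A)

def offsum(j, i):  # sum_{k != j,i} |a_jk|
    return sum(abs(A[j, k]) for k in range(n) if k != j and k != i)

r = [max(abs(A[j, i]) / (A[j, j] - offsum(j, i)) for j in range(n) if j != i)
     for i in range(n)]
m = [[(abs(A[j, i]) + r[i] * offsum(j, i)) / A[j, j] for i in range(n)]
     for j in range(n)]
u = [[(abs(A[j, i]) + sum(abs(A[j, k]) * m[k][i] for k in range(n) if k != j and k != i))
      / A[j, j] for i in range(n)] for j in range(n)]

# b_ji <= u_ji * b_ii for all j != i (tolerance for floating point: the
# inequality is attained with equality for some entries of this example)
lemma22_holds = all(B[j, i] <= u[j][i] * B[i, i] + 1e-12
                    for i in range(n) for j in range(n) if j != i)
```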

Lemma 2.3 Let $A=\left({a}_{ij}\right)\in {\mathcal{M}}_{n}$ be strictly diagonally dominant by row and let ${A}^{-1}=\left({b}_{ij}\right)$. Then

$\frac{1}{{a}_{ii}}\le {b}_{ii}\le \frac{1}{{a}_{ii}-{\sum }_{j\ne i}|{a}_{ij}|{u}_{ji}},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Proof Let $B={A}^{-1}$. Since A is an M-matrix, we have $B\ge 0$. By $AB=BA={I}_{n}$, we have

$1=\sum _{j=1}^{n}{a}_{ij}{b}_{ji}={a}_{ii}{b}_{ii}-\sum _{j\ne i}|{a}_{ij}|{b}_{ji},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Hence

$1\le {a}_{ii}{b}_{ii},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N,$

or equivalently,

$\frac{1}{{a}_{ii}}\le {b}_{ii},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Furthermore, by Lemma 2.2, we get

$1={a}_{ii}{b}_{ii}-\sum _{j\ne i}|{a}_{ij}|{b}_{ji}\ge \left({a}_{ii}-\sum _{j\ne i}|{a}_{ij}|{u}_{ji}\right){b}_{ii},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N,$

i.e.,

${b}_{ii}\le \frac{1}{{a}_{ii}-{\sum }_{j\ne i}|{a}_{ij}|{u}_{ji}},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Thus the proof is completed. □
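The two-sided bound of Lemma 2.3 can be checked the same way; the sketch below evaluates $1/{a}_{ii}$ and $1/({a}_{ii}-{\sum }_{j\ne i}|{a}_{ij}|{u}_{ji})$ for an assumed 3×3 example and compares them with the true diagonal entries of ${A}^{-1}$:

```python
import numpy as np

# Assumed strictly diagonally dominant M-matrix (example, not from the paper).
A = np.array([[4.0, -1.0, -1.0],
              [-2.0, 5.0, -1.0],
              [0.0, -2.0, 4.0]])
B = np.linalg.inv(A)
n = len(A)

def offsum(j, i):  # sum_{k != j,i} |a_jk|
    return sum(abs(A[j, k]) for k in range(n) if k != j and k != i)

r = [max(abs(A[j, i]) / (A[j, j] - offsum(j, i)) for j in range(n) if j != i)
     for i in range(n)]
m = [[(abs(A[j, i]) + r[i] * offsum(j, i)) / A[j, j] for i in range(n)]
     for j in range(n)]
u = [[(abs(A[j, i]) + sum(abs(A[j, k]) * m[k][i] for k in range(n) if k != j and k != i))
      / A[j, j] for i in range(n)] for j in range(n)]

# 1/a_ii <= b_ii <= 1 / (a_ii - sum_{j != i} |a_ij| u_ji)
lower = [1.0 / A[i, i] for i in range(n)]
upper = [1.0 / (A[i, i] - sum(abs(A[i, j]) * u[j][i] for j in range(n) if j != i))
         for i in range(n)]
lemma23_holds = all(lower[i] - 1e-12 <= B[i, i] <= upper[i] + 1e-12
                    for i in range(n))
```

For this example the upper bound is attained exactly on two of the three diagonal entries, which illustrates its sharpness.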

Lemma 2.4 

Let $A\in {\mathbb{C}}^{n×n}$ and ${x}_{1},{x}_{2},\dots ,{x}_{n}$ be positive real numbers. Then all the eigenvalues of A lie in the region

$\bigcup _{i=1}^{n}\left\{z\in \mathbb{C}:|z-{a}_{ii}|\le {x}_{i}\sum _{j\ne i}\frac{1}{{x}_{j}}|{a}_{ji}|\right\}.$
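Lemma 2.4 is a weighted, column-wise variant of the Gershgorin circle theorem. A small numerical check, with an assumed matrix and assumed positive weights ${x}_{i}$:

```python
import numpy as np

# Assumed arbitrary matrix and arbitrary positive weights x_i.
A = np.array([[3.0, 1.0, 0.0],
              [-1.0, 4.0, 2.0],
              [0.5, -1.0, 5.0]])
x = [1.0, 2.0, 0.5]
n = len(A)

# radius of the i-th disk: x_i * sum_{j != i} |a_ji| / x_j  (column sums)
radii = [x[i] * sum(abs(A[j, i]) / x[j] for j in range(n) if j != i)
         for i in range(n)]
eigs = np.linalg.eigvals(A)
all_covered = all(any(abs(lam - A[i, i]) <= radii[i] + 1e-9 for i in range(n))
                  for lam in eigs)
```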

Lemma 2.5 

If ${A}^{-1}$ is a doubly stochastic matrix, then $Ae=e$, ${A}^{T}e=e$, where $e={\left(1,1,\dots ,1\right)}^{T}$.

## 3 Main results

In this section, we give two new lower bounds for $\tau \left(A\circ {A}^{-1}\right)$ which improve the ones in  and .

Lemma 3.1 If $A\in {\mathcal{M}}_{n}$ and ${A}^{-1}=\left({b}_{ij}\right)$ is a doubly stochastic matrix, then

${b}_{ii}\ge \frac{1}{1+{\sum }_{j\ne i}{u}_{ji}},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Proof The proof is similar to those of Lemma 3.2 in [8] and Theorem 3.2 in [9]. □

Theorem 3.1 Let $A\in {\mathcal{M}}_{n}$ and ${A}^{-1}=\left({b}_{ij}\right)$ be a doubly stochastic matrix. Then

$\tau \left(A\circ {A}^{-1}\right)\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\right\}.$

Proof Firstly, we assume that A is irreducible. By Lemma 2.5, we have

${a}_{ii}=\sum _{j\ne i}|{a}_{ij}|+1=\sum _{j\ne i}|{a}_{ji}|+1\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{a}_{ii}>1,\phantom{\rule{1em}{0ex}}i\in N.$

Denote

${u}_{j}=\underset{i\ne j}{max}\left\{{u}_{ji}\right\}=\underset{i\ne j}{max}\left\{\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}\right\},\phantom{\rule{1em}{0ex}}j\in N.$

Since A is an irreducible matrix, we know that $0<{u}_{j}\le 1$. So, by Lemma 2.4, there exists ${i}_{0}\in N$ such that

$|\lambda -{a}_{{i}_{0}{i}_{0}}{b}_{{i}_{0}{i}_{0}}|\le {u}_{{i}_{0}}\sum _{j\ne {i}_{0}}\frac{1}{{u}_{j}}|{a}_{j{i}_{0}}{b}_{j{i}_{0}}|,$

or equivalently,

$\lambda \ge {a}_{{i}_{0}{i}_{0}}{b}_{{i}_{0}{i}_{0}}-{u}_{{i}_{0}}\sum _{j\ne {i}_{0}}\frac{1}{{u}_{j}}|{a}_{j{i}_{0}}|{b}_{j{i}_{0}}.$

In particular, taking $\lambda =\tau \left(A\circ {A}^{-1}\right)$ and applying Lemma 2.2, i.e., ${b}_{j{i}_{0}}\le {u}_{j}{b}_{{i}_{0}{i}_{0}}$, we obtain

$\tau \left(A\circ {A}^{-1}\right)\ge \left({a}_{{i}_{0}{i}_{0}}-{u}_{{i}_{0}}\sum _{j\ne {i}_{0}}|{a}_{j{i}_{0}}|\right){b}_{{i}_{0}{i}_{0}}=\left({a}_{{i}_{0}{i}_{0}}-{u}_{{i}_{0}}{R}_{{i}_{0}}\right){b}_{{i}_{0}{i}_{0}}.$

Since ${u}_{{i}_{0}}\le 1$ and ${R}_{{i}_{0}}={a}_{{i}_{0}{i}_{0}}-1$, we have ${a}_{{i}_{0}{i}_{0}}-{u}_{{i}_{0}}{R}_{{i}_{0}}\ge 1>0$. Hence, by Lemma 3.1,

$\tau \left(A\circ {A}^{-1}\right)\ge \frac{{a}_{{i}_{0}{i}_{0}}-{u}_{{i}_{0}}{R}_{{i}_{0}}}{1+{\sum }_{j\ne {i}_{0}}{u}_{j{i}_{0}}}\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\right\}.$

Secondly, if A is reducible, without loss of generality, we may assume that A has the following block upper triangular form:

$A=\left[\begin{array}{cccc}{A}_{11}& {A}_{12}& \cdots & {A}_{1K}\\ 0& {A}_{22}& \cdots & {A}_{2K}\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {A}_{KK}\end{array}\right],$

where ${A}_{ii}\in {\mathcal{M}}_{{n}_{i}}$ is an irreducible diagonal block matrix, $i=1,2,\dots ,K$. Obviously, $\tau \left(A\circ {A}^{-1}\right)={min}_{i}\tau \left({A}_{ii}\circ {A}_{ii}^{-1}\right)$. Thus the reducible case is converted into the irreducible case. This proof is completed. □
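For the 4×4 matrix of Section 4, whose inverse is doubly stochastic, the bound of Theorem 3.1 can be evaluated directly. The sketch below reimplements the definitions of Section 1 (it is an illustration, not code from the paper); it yields exactly the value 0.8250 reported in Section 4 and confirms that it is a valid lower bound for $\tau \left(A\circ {A}^{-1}\right)$.

```python
import numpy as np

A = np.array([[4., -1., -1., -1.],
              [-2., 5., -1., -1.],
              [0., -2., 4., -1.],
              [-1., -1., -1., 4.]])
n = len(A)

def offsum(j, i):  # sum_{k != j,i} |a_jk|
    return sum(abs(A[j, k]) for k in range(n) if k != j and k != i)

r = [max(abs(A[j, i]) / (A[j, j] - offsum(j, i)) for j in range(n) if j != i)
     for i in range(n)]
m = [[(abs(A[j, i]) + r[i] * offsum(j, i)) / A[j, j] for i in range(n)]
     for j in range(n)]
u = [[(abs(A[j, i]) + sum(abs(A[j, k]) * m[k][i] for k in range(n) if k != j and k != i))
      / A[j, j] for i in range(n)] for j in range(n)]

R = [sum(abs(A[j, i]) for j in range(n) if j != i) for i in range(n)]   # column sums
u_max = [max(u[i][k] for k in range(n) if k != i) for i in range(n)]    # u_i
bound_u = min((A[i, i] - u_max[i] * R[i])
              / (1 + sum(u[j][i] for j in range(n) if j != i)) for i in range(n))
tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)
```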

Theorem 3.2 If $A=\left({a}_{ij}\right)\in {\mathcal{M}}_{n}$ is strictly diagonally dominant by row, then

$\underset{i}{min}\left\{\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\right\}\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{s}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{s}_{ji}}\right\}.$

Proof Since A is strictly diagonally dominant by row, for any $j\ne i$, we have

$\begin{array}{rcl}{d}_{j}-{m}_{ji}& =& \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|}{{a}_{jj}}-\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{r}_{i}}{{a}_{jj}}\\ =& \frac{\left(1-{r}_{i}\right){\sum }_{k\ne j,i}|{a}_{jk}|}{{a}_{jj}}\\ \ge & 0,\end{array}$

or equivalently,

${d}_{j}\ge {m}_{ji},\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }j\in N.$
(1)

So, we can obtain

${u}_{ji}=\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{d}_{k}}{{a}_{jj}}={s}_{ji},\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }j\in N,$
(2)

and

${u}_{i}\le {s}_{i},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Therefore, it is easy to obtain that

$\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\ge \frac{{a}_{ii}-{s}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{s}_{ji}},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Obviously, we have the desired result

$\underset{i}{min}\left\{\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\right\}\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{s}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{s}_{ji}}\right\}.$

This proof is completed. □

Theorem 3.3 If $A=\left({a}_{ij}\right)\in {\mathcal{M}}_{n}$ is strictly diagonally dominant by row, then

$\underset{i}{min}\left\{\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\right\}\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{m}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{m}_{ji}}\right\}.$

Proof Since A is strictly diagonally dominant by row, for any $j\ne i$, we have

$\begin{array}{rcl}{r}_{i}-{m}_{ji}& =& {r}_{i}-\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{r}_{i}}{{a}_{jj}}\\ & =& \frac{{r}_{i}{a}_{jj}-|{a}_{ji}|-{r}_{i}{\sum }_{k\ne j,i}|{a}_{jk}|}{{a}_{jj}}\\ & =& \frac{{r}_{i}\left({a}_{jj}-{\sum }_{k\ne j,i}|{a}_{jk}|\right)-|{a}_{ji}|}{{a}_{jj}}\\ & =& \frac{{a}_{jj}-{\sum }_{k\ne j,i}|{a}_{jk}|}{{a}_{jj}}\left({r}_{i}-\frac{|{a}_{ji}|}{{a}_{jj}-{\sum }_{k\ne j,i}|{a}_{jk}|}\right)\\ & \ge & 0,\end{array}$

i.e.,

${r}_{i}\ge {m}_{ji},\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }j\in N.$
(3)

So, we can obtain

${u}_{ji}=\frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{r}_{i}}{{a}_{jj}}={m}_{ji},\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }j\in N,$
(4)

and

${u}_{i}\le {m}_{i},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Therefore, it is easy to obtain that

$\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\ge \frac{{a}_{ii}-{m}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{m}_{ji}},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

Obviously, we have the desired result

$\underset{i}{min}\left\{\frac{{a}_{ii}-{u}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{u}_{ji}}\right\}\ge \underset{i}{min}\left\{\frac{{a}_{ii}-{m}_{i}{R}_{i}}{1+{\sum }_{j\ne i}{m}_{ji}}\right\}.$

□

Remark 3.1 According to inequalities (1) and (3), it is easy to know that

${b}_{ji}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}{b}_{ii}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{d}_{k}}{{a}_{jj}}{b}_{ii},\phantom{\rule{1em}{0ex}}j\ne i,\mathrm{\forall }i\in N,$

and

${b}_{ji}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{m}_{ki}}{{a}_{jj}}{b}_{ii}\le \frac{|{a}_{ji}|+{\sum }_{k\ne j,i}|{a}_{jk}|{r}_{i}}{{a}_{jj}}{b}_{ii},\phantom{\rule{1em}{0ex}}\mathrm{\forall }i\in N.$

That is to say, the result of Lemma 2.2 is sharper than the ones of Theorem 2.1 in [8] and Lemma 2.2 in [9]. Moreover, the results of Theorem 3.2 and Theorem 3.3 are sharper than the ones of Theorem 3.1 in [8] and Theorem 3.3 in [9], respectively.

Theorem 3.4 If $A\in {\mathcal{M}}_{n}$ is strictly diagonally dominant by row, then

$\tau \left(A\circ {A}^{-1}\right)\ge \underset{i}{min}\left\{1-\frac{1}{{a}_{ii}}\sum _{j\ne i}|{a}_{ji}|{u}_{ji}\right\}.$

Proof The proof is similar to the one of Theorem 3.5 in [8]. □
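The bound of Theorem 3.4 can likewise be evaluated numerically. The sketch below computes it for the 4×4 matrix of Section 4 and checks it against $\tau \left(A\circ {A}^{-1}\right)$:

```python
import numpy as np

A = np.array([[4., -1., -1., -1.],
              [-2., 5., -1., -1.],
              [0., -2., 4., -1.],
              [-1., -1., -1., 4.]])
n = len(A)

def offsum(j, i):  # sum_{k != j,i} |a_jk|
    return sum(abs(A[j, k]) for k in range(n) if k != j and k != i)

r = [max(abs(A[j, i]) / (A[j, j] - offsum(j, i)) for j in range(n) if j != i)
     for i in range(n)]
m = [[(abs(A[j, i]) + r[i] * offsum(j, i)) / A[j, j] for i in range(n)]
     for j in range(n)]
u = [[(abs(A[j, i]) + sum(abs(A[j, k]) * m[k][i] for k in range(n) if k != j and k != i))
      / A[j, j] for i in range(n)] for j in range(n)]

# min_i { 1 - (1/a_ii) * sum_{j != i} |a_ji| u_ji }
bound_34 = min(1 - sum(abs(A[j, i]) * u[j][i] for j in range(n) if j != i) / A[i, i]
               for i in range(n))
tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)
```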

Remark 3.2 According to inequalities (2) and (4), we get

$1-\frac{1}{{a}_{ii}}\sum _{j\ne i}|{a}_{ji}|{u}_{ji}\ge 1-\frac{1}{{a}_{ii}}\sum _{j\ne i}|{a}_{ji}|{s}_{ji},$

and

$1-\frac{1}{{a}_{ii}}\sum _{j\ne i}|{a}_{ji}|{u}_{ji}\ge 1-\frac{1}{{a}_{ii}}\sum _{j\ne i}|{a}_{ji}|{m}_{ji}.$

That is to say, the bound of Theorem 3.4 is sharper than the ones of Theorem 3.5 in [8] and Theorem 3.4 in [9], respectively.

Remark 3.3 Using similar ideas, we can obtain analogous inequalities for a strictly diagonally dominant M-matrix by column.

## 4 Example

For convenience, we consider the same M-matrix A as the one in [9]. Define the M-matrix A as follows:

$A=\left[\begin{array}{cccc}4& -1& -1& -1\\ -2& 5& -1& -1\\ 0& -2& 4& -1\\ -1& -1& -1& 4\end{array}\right].$
1. Estimate the upper bounds for the entries of ${A}^{-1}=\left({b}_{ij}\right)$. Firstly, by Lemma 2.2(2) in [9], we have

${A}^{-1}\le \left[\begin{array}{cccc}1& 0.5833& 0.5000& 0.5000\\ 0.6667& 1& 0.5000& 0.5000\\ 0.5000& 0.6667& 1& 0.5000\\ 0.5833& 0.5833& 0.5000& 1\end{array}\right]\circ \left[\begin{array}{cccc}{b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\\ {b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\\ {b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\\ {b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\end{array}\right].$

By Lemma 2.2, we have

${A}^{-1}\le \left[\begin{array}{cccc}1& 0.5625& 0.5000& 0.5000\\ 0.6167& 1& 0.5000& 0.5000\\ 0.4792& 0.6458& 1& 0.5000\\ 0.5417& 0.5625& 0.5000& 1\end{array}\right]\circ \left[\begin{array}{cccc}{b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\\ {b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\\ {b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\\ {b}_{11}& {b}_{22}& {b}_{33}& {b}_{44}\end{array}\right].$

Similarly, by Lemma 2.3 and Theorem 3.1 in [8], and by Lemma 2.3 and Lemma 3.1, we can bound the diagonal entries ${b}_{ii}$ of ${A}^{-1}$.

2. Lower bounds for $\tau \left(A\circ {A}^{-1}\right)$.

By Theorem 3.2 in [9], we obtain

$0.9755=\tau \left(A\circ {A}^{-1}\right)\ge 0.8000.$

By Theorem 3.1, we obtain

$0.9755=\tau \left(A\circ {A}^{-1}\right)\ge 0.8250.$
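The table of factors ${u}_{ji}$ displayed above can be reproduced from the definitions in Section 1. The sketch below rebuilds it and checks the values rounded to four decimals:

```python
import numpy as np

A = np.array([[4., -1., -1., -1.],
              [-2., 5., -1., -1.],
              [0., -2., 4., -1.],
              [-1., -1., -1., 4.]])
n = len(A)

def offsum(j, i):  # sum_{k != j,i} |a_jk|
    return sum(abs(A[j, k]) for k in range(n) if k != j and k != i)

r = [max(abs(A[j, i]) / (A[j, j] - offsum(j, i)) for j in range(n) if j != i)
     for i in range(n)]
m = [[(abs(A[j, i]) + r[i] * offsum(j, i)) / A[j, j] for i in range(n)]
     for j in range(n)]
# diagonal set to 1 to match the displayed factor matrix
u = [[(abs(A[j, i]) + sum(abs(A[j, k]) * m[k][i] for k in range(n) if k != j and k != i))
      / A[j, j] if j != i else 1.0 for i in range(n)] for j in range(n)]

u_rounded = [[round(u[j][i], 4) for i in range(n)] for j in range(n)]
```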

## References

1. Berman A, Plemmons RJ: Nonnegative Matrices in the Mathematical Sciences. Classics in Applied Mathematics 9. SIAM, Philadelphia; 1994.

2. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1991.

3. Fiedler M, Markham TL: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 1988, 101: 1–8.

4. Fiedler M, Johnson CR, Markham T, Neumann M: A trace inequality for M-matrices and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 1985, 71: 81–94.

5. Song YZ: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2000, 305: 99–105. 10.1016/S0024-3795(99)00224-4

6. Yong XR: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 2000, 320: 167–171. 10.1016/S0024-3795(00)00211-1

7. Chen SC: A lower bound for the minimum eigenvalue of the Hadamard product of matrix. Linear Algebra Appl. 2004, 378: 159–166.

8. Li HB, Huang TZ, Shen SQ, Li H: Lower bounds for the eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2007, 420: 235–247. 10.1016/j.laa.2006.07.008

9. Li YT, Chen FB, Wang DF: New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2009, 430: 1423–1431. 10.1016/j.laa.2008.11.002

10. Varga RS: Minimal Gerschgorin sets. Pac. J. Math. 1965, 15(2):719–729. 10.2140/pjm.1965.15.719

11. Sinkhorn R: A relationship between arbitrary positive matrices and doubly stochastic matrices. Ann. Math. Stat. 1964, 35: 876–879. 10.1214/aoms/1177703591

## Acknowledgements

This research is supported by National Natural Science Foundations of China (No. 11101069).

## Author information


### Corresponding author

Correspondence to Guanghui Cheng.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.


Cheng, G., Tan, Q. & Wang, Z. Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J Inequal Appl 2013, 65 (2013). https://doi.org/10.1186/1029-242X-2013-65
