
Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse

Journal of Inequalities and Applications 2013, 2013:65

DOI: 10.1186/1029-242X-2013-65

Received: 31 July 2012

Accepted: 24 January 2013

Published: 21 February 2013

Abstract

In this paper, some new inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse are given. These inequalities are sharper than the well-known results. A simple example is shown.

AMS Subject Classification: 15A18, 15A42.

Keywords

Hadamard product; M-matrix; inverse M-matrix; strictly diagonally dominant matrix; eigenvalue

1 Introduction

A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called a nonnegative matrix if $a_{ij}\ge 0$. A matrix $A\in\mathbb{R}^{n\times n}$ is called a nonsingular M-matrix [1] if there exist $B\ge 0$ and $s>0$ such that
$$A=sI_n-B \quad\text{and}\quad s>\rho(B),$$
where $\rho(B)$ is the spectral radius of the nonnegative matrix B and $I_n$ is the $n\times n$ identity matrix. Denote by $M_n$ the set of all $n\times n$ nonsingular M-matrices. The matrices in $M_n^{-1}:=\{A^{-1}:A\in M_n\}$ are called inverse M-matrices. Let us denote
$$\tau(A)=\min\{\operatorname{Re}\lambda:\lambda\in\sigma(A)\},$$
where $\sigma(A)$ denotes the spectrum of A. It is known [2] that
$$\tau(A)=\frac{1}{\rho(A^{-1})}$$
is a positive real eigenvalue of $A\in M_n$ and the corresponding eigenvector is nonnegative. Indeed,
$$\tau(A)=s-\rho(B)$$
if $A=sI_n-B$, where $s>\rho(B)$ and $B\ge 0$.

For any two $n\times n$ matrices $A=(a_{ij})$ and $B=(b_{ij})$, the Hadamard product of A and B is $A\circ B=(a_{ij}b_{ij})$. If $A,B\in M_n$, then $A\circ B^{-1}$ is also an M-matrix [3].

A matrix A is irreducible if there does not exist a permutation matrix P such that
$$PAP^{T}=\begin{bmatrix}A_{1,1}&A_{1,2}\\0&A_{2,2}\end{bmatrix},$$
where $A_{1,1}$ and $A_{2,2}$ are square matrices.

For convenience, the set $\{1,2,\ldots,n\}$ is denoted by N, where n (≥3) is any positive integer. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be strictly diagonally dominant by row, and denote, for $i,j\in N$ with $j\ne i$,
$$d_i=\frac{\sum_{k\ne i}|a_{ik}|}{|a_{ii}|};\qquad R_i=\sum_{j\ne i}|a_{ji}|;\qquad r_{ji}=\frac{|a_{ji}|}{|a_{jj}|-\sum_{k\ne j,i}|a_{jk}|},\qquad r_i=\max_{j\ne i}\{r_{ji}\};$$
$$s_{ji}=\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|d_k}{|a_{jj}|},\qquad s_i=\max_{j\ne i}\{s_{ij}\};\qquad m_{ji}=\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|r_i}{|a_{jj}|},\qquad m_i=\max_{j\ne i}\{m_{ij}\};$$
$$u_{ji}=\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{|a_{jj}|},\qquad u_i=\max_{j\ne i}\{u_{ij}\}.$$
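To make the notation above concrete, here is a small computational sketch in Python with NumPy (our illustration, not part of the paper; the helper name dominance_quantities and the convention that x[j, i] stands for $x_{ji}$ are our own assumptions). The later sketches below reuse this helper.

import numpy as np

def dominance_quantities(A):
    """Quantities d_i, r_i, s_ji, m_ji, u_ji and R_i of Section 1 for a strictly
    diagonally dominant matrix A; x[j, i] stands for x_{ji} (j != i)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    absA = np.abs(A)
    diag = absA.diagonal()
    row_off = absA.sum(axis=1) - diag                  # sum_{k != j} |a_{jk}|
    d = row_off / diag                                 # d_j
    R = absA.sum(axis=0) - diag                        # R_i = sum_{j != i} |a_{ji}|
    r_ji = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                rest = row_off[j] - absA[j, i]         # sum_{k != j,i} |a_{jk}|
                r_ji[j, i] = absA[j, i] / (diag[j] - rest)
    r = r_ji.max(axis=0)                               # r_i = max_{j != i} r_{ji}
    s = np.zeros((n, n))
    m = np.zeros((n, n))
    u = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                ks = [k for k in range(n) if k != i and k != j]
                s[j, i] = (absA[j, i] + sum(absA[j, k] * d[k] for k in ks)) / diag[j]
                m[j, i] = (absA[j, i] + sum(absA[j, k] * r[i] for k in ks)) / diag[j]
    for i in range(n):
        for j in range(n):
            if j != i:
                ks = [k for k in range(n) if k != i and k != j]
                u[j, i] = (absA[j, i] + sum(absA[j, k] * m[k, i] for k in ks)) / diag[j]
    return d, r, s, m, u, R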
Recently, some lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix have been proposed. Let $A\in M_n$. For example, Fiedler et al. [4] proved that $\tau(A\circ A^{-1})\le 1$. Subsequently, Fiedler and Markham [3] showed that $\tau(A\circ A^{-1})>\frac{1}{n}$ and conjectured that $\tau(A\circ A^{-1})\ge\frac{2}{n}$. Song [5], Yong [6] and Chen [7] independently proved this conjecture. In [8], Li et al. improved the bound $\tau(A\circ A^{-1})\ge\frac{2}{n}$ when $A^{-1}$ is a doubly stochastic matrix and gave the following result:
$$\tau(A\circ A^{-1})\ge\min_i\left\{\frac{a_{ii}-s_iR_i}{1+\sum_{j\ne i}s_{ji}}\right\}.$$
In [9], Li et al. gave the following result:
$$\tau(A\circ A^{-1})\ge\min_i\left\{\frac{a_{ii}-m_iR_i}{1+\sum_{j\ne i}m_{ji}}\right\}.$$
Furthermore, if $a_{11}=a_{22}=\cdots=a_{nn}$, they obtained
$$\min_i\left\{\frac{a_{ii}-m_iR_i}{1+\sum_{j\ne i}m_{ji}}\right\}\ge\min_i\left\{\frac{a_{ii}-s_iR_i}{1+\sum_{j\ne i}s_{ji}}\right\},$$

i.e., under this condition, the bound of [9] is better than that of [8].

In this paper, our motive is to improve the lower bounds for the minimum eigenvalue $\tau(A\circ A^{-1})$. The main ideas are based on those of [8] and [9].

2 Some preliminaries and notations

In this section, we give some notation and lemmas concerning inequalities for the entries of the inverse of an M-matrix and of a strictly diagonally dominant matrix.

Lemma 2.1 [6]

Let $A\in\mathbb{R}^{n\times n}$ be a strictly diagonally dominant matrix by row, i.e.,
$$|a_{ii}|>\sum_{j\ne i}|a_{ij}|,\quad i\in N.$$
If $A^{-1}=(b_{ij})$, then
$$|b_{ji}|\le\frac{\sum_{k\ne j}|a_{jk}|}{|a_{jj}|}|b_{ii}|,\quad j\ne i,\ j\in N.$$
Lemma 2.2 Let $A\in\mathbb{R}^{n\times n}$ be a strictly diagonally dominant M-matrix by row. If $A^{-1}=(b_{ij})$, then
$$b_{ji}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}b_{ii}=u_{ji}b_{ii}\le u_jb_{ii},\quad j\ne i,\ i\in N.$$
Proof Let $A\in\mathbb{R}^{n\times n}$ be a strictly diagonally dominant M-matrix by row. For $i\in N$, let
$$r_i(\varepsilon)=\max_{j\ne i}\left\{\frac{|a_{ji}|+\varepsilon}{a_{jj}-\sum_{k\ne j,i}|a_{jk}|}\right\}$$
and
$$m_{ji}(\varepsilon)=\frac{r_i(\varepsilon)\bigl(\sum_{k\ne j,i}|a_{jk}|+\varepsilon\bigr)+|a_{ji}|}{a_{jj}},\quad j\ne i.$$
Since A is strictly diagonally dominant, $r_{ji}<1$ and $m_{ji}<1$ for $j\ne i$. Therefore, there exists $\varepsilon>0$ such that $0<r_i(\varepsilon)<1$ and $0<m_{ji}(\varepsilon)<1$. Let us define the positive diagonal matrix
$$M_i(\varepsilon)=\operatorname{diag}\bigl(m_{1i}(\varepsilon),\ldots,m_{i-1,i}(\varepsilon),1,m_{i+1,i}(\varepsilon),\ldots,m_{ni}(\varepsilon)\bigr).$$
Similarly to the proofs of Theorem 2.1 and Theorem 2.4 in [8], we can prove that the matrix $AM_i(\varepsilon)$ is also a strictly diagonally dominant M-matrix by row for any $i\in N$. Furthermore, by Lemma 2.1, we obtain
$$m_{ji}^{-1}(\varepsilon)b_{ji}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}(\varepsilon)}{m_{ji}(\varepsilon)a_{jj}}b_{ii},\quad j\ne i,\ j\in N,$$
i.e.,
$$b_{ji}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}(\varepsilon)}{a_{jj}}b_{ii},\quad j\ne i,\ j\in N.$$
Letting $\varepsilon\to 0^{+}$, we get
$$b_{ji}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}b_{ii}=u_{ji}b_{ii}\le u_jb_{ii},\quad j\ne i,\ j\in N.$$

The proof is completed. □

Lemma 2.3 Let $A=(a_{ij})\in M_n$ be strictly diagonally dominant by row and let $A^{-1}=(b_{ij})$. Then
$$\frac{1}{a_{ii}}\le b_{ii}\le\frac{1}{a_{ii}-\sum_{j\ne i}|a_{ij}|u_{ji}},\quad i\in N.$$
Proof Let $B=A^{-1}$. Since A is an M-matrix, we have $B\ge 0$. From $AB=BA=I_n$, we have
$$1=\sum_{j=1}^{n}a_{ij}b_{ji}=a_{ii}b_{ii}-\sum_{j\ne i}|a_{ij}|b_{ji},\quad i\in N.$$
Hence
$$1\le a_{ii}b_{ii},\quad i\in N,$$
or equivalently,
$$\frac{1}{a_{ii}}\le b_{ii},\quad i\in N.$$
Furthermore, by Lemma 2.2, we get
$$1=a_{ii}b_{ii}-\sum_{j\ne i}|a_{ij}|b_{ji}\ge\Bigl(a_{ii}-\sum_{j\ne i}|a_{ij}|u_{ji}\Bigr)b_{ii},\quad i\in N,$$
i.e.,
$$b_{ii}\le\frac{1}{a_{ii}-\sum_{j\ne i}|a_{ij}|u_{ji}},\quad i\in N.$$

Thus the proof is completed. □
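As a numerical illustration of Lemmas 2.2 and 2.3 (a sketch of ours, not part of the paper), the following check uses a randomly generated strictly diagonally dominant M-matrix; it assumes the dominance_quantities helper from the sketch in Section 1.

import numpy as np
# assumes dominance_quantities(A) from the sketch in Section 1

rng = np.random.default_rng(0)
n = 6
A = -rng.uniform(0.0, 1.0, size=(n, n))             # nonpositive off-diagonal entries
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, np.abs(A).sum(axis=1) + rng.uniform(0.5, 1.0, size=n))
# A is now a strictly diagonally dominant (by row) M-matrix
B = np.linalg.inv(A)                                 # B = (b_ij) >= 0
d, r, s, m, u, R = dominance_quantities(A)

for i in range(n):
    # Lemma 2.3: 1/a_ii <= b_ii <= 1/(a_ii - sum_{j != i} |a_ij| u_ji)
    upper = 1.0 / (A[i, i] - sum(abs(A[i, j]) * u[j, i] for j in range(n) if j != i))
    assert 1.0 / A[i, i] <= B[i, i] + 1e-12 and B[i, i] <= upper + 1e-12
    for j in range(n):
        if j != i:
            assert B[j, i] <= u[j, i] * B[i, i] + 1e-12   # Lemma 2.2: b_ji <= u_ji b_ii
print("Lemma 2.2 and Lemma 2.3 verified on a random example.")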

Lemma 2.4 [10]

Let $A\in\mathbb{C}^{n\times n}$ and let $x_1,x_2,\ldots,x_n$ be positive real numbers. Then all the eigenvalues of A lie in the region
$$\bigcup_{i=1}^{n}\left\{z\in\mathbb{C}:|z-a_{ii}|\le x_i\sum_{j\ne i}\frac{1}{x_j}|a_{ji}|\right\}.$$
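The following minimal, self-contained check (ours) illustrates this weighted Gershgorin-type inclusion: for an arbitrary complex matrix and arbitrary positive weights, every eigenvalue lies in at least one of the n discs.

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.uniform(0.5, 2.0, size=n)                    # positive weights x_1, ..., x_n

radii = [x[i] * sum(abs(A[j, i]) / x[j] for j in range(n) if j != i) for i in range(n)]
for lam in np.linalg.eigvals(A):
    # Lemma 2.4: lam lies in the union of the discs |z - a_ii| <= radii[i]
    assert any(abs(lam - A[i, i]) <= radii[i] + 1e-10 for i in range(n))
print("All eigenvalues lie in the union of the weighted discs.")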

Lemma 2.5 [11]

If $A^{-1}$ is a doubly stochastic matrix, then $Ae=e$ and $A^{T}e=e$, where $e=(1,1,\ldots,1)^{T}$.

3 Main results

In this section, we give two new lower bounds for $\tau(A\circ A^{-1})$ which improve those in [8] and [9].

Lemma 3.1 If $A\in M_n$ and $A^{-1}=(b_{ij})$ is a doubly stochastic matrix, then
$$b_{ii}\ge\frac{1}{1+\sum_{j\ne i}u_{ji}},\quad i\in N.$$

Proof The proof is similar to those of Lemma 3.2 in [8] and Theorem 3.2 in [9]. □

Theorem 3.1 Let $A\in M_n$ and let $A^{-1}=(b_{ij})$ be a doubly stochastic matrix. Then
$$\tau(A\circ A^{-1})\ge\min_i\left\{\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\right\}.$$
Proof Firstly, we assume that A is irreducible. By Lemma 2.5, we have
$$a_{ii}=\sum_{j\ne i}|a_{ij}|+1=\sum_{j\ne i}|a_{ji}|+1\quad\text{and}\quad a_{ii}>1,\quad i\in N.$$
Denote
$$u_j=\max_{i\ne j}\{u_{ji}\}=\max_{i\ne j}\left\{\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}\right\},\quad j\in N.$$
Since A is an irreducible matrix, we know that $0<u_j\le 1$. Let $\lambda$ be any eigenvalue of $A\circ A^{-1}$. By Lemma 2.4, there exists $i_0\in N$ such that
$$|\lambda-a_{i_0i_0}b_{i_0i_0}|\le u_{i_0}\sum_{j\ne i_0}\frac{1}{u_j}|a_{ji_0}b_{ji_0}|,$$
or equivalently,
$$\begin{aligned}
|\lambda|&\ge a_{i_0i_0}b_{i_0i_0}-u_{i_0}\sum_{j\ne i_0}\frac{1}{u_j}|a_{ji_0}b_{ji_0}|\\
&\ge a_{i_0i_0}b_{i_0i_0}-u_{i_0}\sum_{j\ne i_0}\frac{1}{u_j}|a_{ji_0}|u_jb_{i_0i_0}\quad(\text{by Lemma 2.2})\\
&=\Bigl(a_{i_0i_0}-u_{i_0}\sum_{j\ne i_0}|a_{ji_0}|\Bigr)b_{i_0i_0}=\bigl(a_{i_0i_0}-u_{i_0}R_{i_0}\bigr)b_{i_0i_0}\\
&\ge\frac{a_{i_0i_0}-u_{i_0}R_{i_0}}{1+\sum_{j\ne i_0}u_{ji_0}}\quad(\text{by Lemma 3.1})\\
&\ge\min_i\left\{\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\right\}.
\end{aligned}$$
In particular, taking $\lambda=\tau(A\circ A^{-1})$ gives the stated bound.

Secondly, if A is reducible, without loss of generality we may assume that A has the block upper triangular form
$$A=\begin{bmatrix}A_{11}&A_{12}&\cdots&A_{1K}\\0&A_{22}&\cdots&A_{2K}\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&A_{KK}\end{bmatrix},$$

where each $A_{ii}\in M_{n_i}$ is an irreducible diagonal block, $i=1,2,\ldots,K$. Obviously, $\tau(A\circ A^{-1})=\min_i\tau(A_{ii}\circ A_{ii}^{-1})$. Thus the reducible case is converted into the irreducible case. The proof is completed. □
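As a small numerical illustration of Theorem 3.1 (ours, not from the paper), consider a 3×3 strictly diagonally dominant M-matrix whose row and column sums are all 1, so that its inverse is doubly stochastic; the sketch again assumes the dominance_quantities helper from Section 1.

import numpy as np
# assumes dominance_quantities(A) from the sketch in Section 1

A = np.array([[ 3., -1., -1.],
              [-2.,  4., -1.],
              [ 0., -2.,  3.]])     # Ae = e and A^T e = e, hence A^{-1} is doubly stochastic
B = np.linalg.inv(A)
n = A.shape[0]
d, r, s, m, u, R = dominance_quantities(A)

u_i = [max(u[i, j] for j in range(n) if j != i) for i in range(n)]   # u_i = max_{j != i} u_ij
bound = min((A[i, i] - u_i[i] * R[i]) / (1.0 + sum(u[j, i] for j in range(n) if j != i))
            for i in range(n))
tau = min(np.linalg.eigvals(A * B).real)     # "*" is the entrywise (Hadamard) product A o A^{-1}
assert tau >= bound - 1e-12                  # Theorem 3.1
print(f"tau = {tau:.4f} >= bound = {bound:.4f}")   # roughly 0.95 >= 0.83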

Theorem 3.2 If $A=(a_{ij})\in M_n$ is strictly diagonally dominant by row, then
$$\min_i\left\{\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\right\}\ge\min_i\left\{\frac{a_{ii}-s_iR_i}{1+\sum_{j\ne i}s_{ji}}\right\}.$$
Proof Since A is strictly diagonally dominant by row, for any $j\ne i$ we have
$$d_j-m_{ji}=\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|}{a_{jj}}-\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|r_i}{a_{jj}}=\frac{(1-r_i)\sum_{k\ne j,i}|a_{jk}|}{a_{jj}}\ge 0,$$
or equivalently,
$$d_j\ge m_{ji},\quad j\ne i,\ j\in N.\qquad(1)$$
So we can obtain
$$u_{ji}=\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|d_k}{a_{jj}}=s_{ji},\quad j\ne i,\ j\in N,\qquad(2)$$
and
$$u_i\le s_i,\quad i\in N.$$
Therefore, it is easy to obtain that
$$\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\ge\frac{a_{ii}-s_iR_i}{1+\sum_{j\ne i}s_{ji}},\quad i\in N.$$
Obviously, we have the desired result
$$\min_i\left\{\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\right\}\ge\min_i\left\{\frac{a_{ii}-s_iR_i}{1+\sum_{j\ne i}s_{ji}}\right\}.$$

The proof is completed. □

Theorem 3.3 If $A=(a_{ij})\in M_n$ is strictly diagonally dominant by row, then
$$\min_i\left\{\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\right\}\ge\min_i\left\{\frac{a_{ii}-m_iR_i}{1+\sum_{j\ne i}m_{ji}}\right\}.$$
Proof Since A is strictly diagonally dominant by row, for any $j\ne i$ we have
$$r_i-m_{ji}=r_i-\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|r_i}{a_{jj}}=\frac{r_i\bigl(a_{jj}-\sum_{k\ne j,i}|a_{jk}|\bigr)-|a_{ji}|}{a_{jj}}=\frac{a_{jj}-\sum_{k\ne j,i}|a_{jk}|}{a_{jj}}\left(r_i-\frac{|a_{ji}|}{a_{jj}-\sum_{k\ne j,i}|a_{jk}|}\right)\ge 0,$$
i.e.,
$$r_i\ge m_{ji},\quad j\ne i,\ j\in N.\qquad(3)$$
So we can obtain
$$u_{ji}=\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|r_i}{a_{jj}}=m_{ji},\quad j\ne i,\ j\in N,\qquad(4)$$
and
$$u_i\le m_i,\quad i\in N.$$
Therefore, it is easy to obtain that
$$\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\ge\frac{a_{ii}-m_iR_i}{1+\sum_{j\ne i}m_{ji}},\quad i\in N.$$
Obviously, we have the desired result
$$\min_i\left\{\frac{a_{ii}-u_iR_i}{1+\sum_{j\ne i}u_{ji}}\right\}\ge\min_i\left\{\frac{a_{ii}-m_iR_i}{1+\sum_{j\ne i}m_{ji}}\right\}.$$

 □
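The comparisons in Theorems 3.2 and 3.3 can also be checked numerically. The sketch below (ours) verifies the entrywise orderings of inequalities (2) and (4) and the resulting ordering of the three lower-bound expressions on a random strictly diagonally dominant M-matrix, assuming the dominance_quantities helper from Section 1.

import numpy as np
# assumes dominance_quantities(A) from the sketch in Section 1

rng = np.random.default_rng(3)
n = 5
A = -rng.uniform(0.0, 1.0, size=(n, n))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, np.abs(A).sum(axis=1) + rng.uniform(0.5, 1.0, size=n))  # SDD M-matrix by row
d, r, s, m, u, R = dominance_quantities(A)

off = [(j, i) for i in range(n) for j in range(n) if j != i]
assert all(u[j, i] <= s[j, i] + 1e-12 for j, i in off)   # inequality (2): u_ji <= s_ji
assert all(u[j, i] <= m[j, i] + 1e-12 for j, i in off)   # inequality (4): u_ji <= m_ji

def lower_bound(w):
    """min_i (a_ii - w_i R_i) / (1 + sum_{j != i} w_ji), with w_i = max_{j != i} w_ij."""
    terms = []
    for i in range(n):
        w_i = max(w[i, j] for j in range(n) if j != i)
        terms.append((A[i, i] - w_i * R[i]) / (1.0 + sum(w[j, i] for j in range(n) if j != i)))
    return min(terms)

# Theorems 3.2 and 3.3: the u-based expression dominates the s- and m-based ones.
assert lower_bound(u) >= lower_bound(s) - 1e-12
assert lower_bound(u) >= lower_bound(m) - 1e-12
print(lower_bound(u), lower_bound(m), lower_bound(s))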

Remark 3.1 According to inequalities (1) and (3), it is easy to see that
$$b_{ji}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}b_{ii}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|d_k}{a_{jj}}b_{ii},\quad j\ne i,\ i\in N,$$
and
$$b_{ji}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|m_{ki}}{a_{jj}}b_{ii}\le\frac{|a_{ji}|+\sum_{k\ne j,i}|a_{jk}|r_i}{a_{jj}}b_{ii},\quad j\ne i,\ i\in N.$$

That is to say, the result of Lemma 2.2 is sharper than the ones of Theorem 2.1 in [8] and Lemma 2.2 in [9]. Moreover, the results of Theorem 3.2 and Theorem 3.3 are sharper than those of Theorem 3.1 in [8] and Theorem 3.3 in [9], respectively.

Theorem 3.4 If $A\in M_n$ is strictly diagonally dominant by row, then
$$\tau(A\circ A^{-1})\ge\min_i\left\{1-\frac{1}{a_{ii}}\sum_{j\ne i}|a_{ji}|u_{ji}\right\}.$$

Proof This proof is similar to the one of Theorem 3.5 in [8]. □
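A quick numerical check of Theorem 3.4 (ours), again on a random strictly diagonally dominant M-matrix and again assuming the dominance_quantities helper from Section 1:

import numpy as np
# assumes dominance_quantities(A) from the sketch in Section 1

rng = np.random.default_rng(7)
n = 5
A = -rng.uniform(0.0, 1.0, size=(n, n))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, np.abs(A).sum(axis=1) + rng.uniform(0.5, 1.0, size=n))  # SDD M-matrix by row
d, r, s, m, u, R = dominance_quantities(A)

bound = min(1.0 - sum(abs(A[j, i]) * u[j, i] for j in range(n) if j != i) / A[i, i]
            for i in range(n))
tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)   # tau(A o A^{-1})
assert tau >= bound - 1e-12                                # Theorem 3.4
print(f"tau = {tau:.4f} >= bound = {bound:.4f}")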

Remark 3.2 According to inequalities (2) and (4), we get
$$1-\frac{1}{a_{ii}}\sum_{j\ne i}|a_{ji}|u_{ji}\ge 1-\frac{1}{a_{ii}}\sum_{j\ne i}|a_{ji}|s_{ji}$$
and
$$1-\frac{1}{a_{ii}}\sum_{j\ne i}|a_{ji}|u_{ji}\ge 1-\frac{1}{a_{ii}}\sum_{j\ne i}|a_{ji}|m_{ji}.$$

That is to say, the bound of Theorem 3.4 is sharper than the ones of Theorem 3.5 in [8] and Theorem 3.4 in [9], respectively.

Remark 3.3 Using similar ideas, we can obtain analogous inequalities for a strictly diagonally dominant M-matrix by column.

4 Example

For convenience, we consider the same M-matrix A as in the example of [8], namely
$$A=\begin{bmatrix}4&-1&-1&-1\\-2&5&-1&-1\\0&-2&4&-1\\-1&-1&-1&4\end{bmatrix}.$$
1. Estimate the upper bounds for the entries of $A^{-1}=(b_{ij})$.

Firstly, by Lemma 2.2(2) in [9], we have
$$A^{-1}\le\begin{bmatrix}1&0.5833&0.5000&0.5000\\0.6667&1&0.5000&0.5000\\0.5000&0.6667&1&0.5000\\0.5833&0.5833&0.5000&1\end{bmatrix}\circ\begin{bmatrix}b_{11}&b_{22}&b_{33}&b_{44}\\b_{11}&b_{22}&b_{33}&b_{44}\\b_{11}&b_{22}&b_{33}&b_{44}\\b_{11}&b_{22}&b_{33}&b_{44}\end{bmatrix},$$
where the inequality and the product $\circ$ are entrywise.
By Lemma 2.2, we have
$$A^{-1}\le\begin{bmatrix}1&0.5625&0.5000&0.5000\\0.6167&1&0.5000&0.5000\\0.4792&0.6458&1&0.5000\\0.5417&0.5625&0.5000&1\end{bmatrix}\circ\begin{bmatrix}b_{11}&b_{22}&b_{33}&b_{44}\\b_{11}&b_{22}&b_{33}&b_{44}\\b_{11}&b_{22}&b_{33}&b_{44}\\b_{11}&b_{22}&b_{33}&b_{44}\end{bmatrix}.$$
By Lemma 2.3 and Theorem 3.1 in [9], we get
$$0.3636\le b_{11}\le 0.4444,\qquad 0.3529\le b_{22}\le 0.3871,\qquad 0.4000\le b_{33}\le 0.4000,\qquad 0.4000\le b_{44}\le 0.4000.$$
By Lemma 2.3 and Lemma 3.1, we get
$$0.3791\le b_{11}\le 0.4233,\qquad 0.3609\le b_{22}\le 0.3750,\qquad 0.4000\le b_{33}\le 0.4000,\qquad 0.4000\le b_{44}\le 0.4000.$$
2. Lower bounds for $\tau(A\circ A^{-1})$.

By Theorem 3.2 in [9], we obtain
$$0.9755=\tau(A\circ A^{-1})\ge 0.8000,$$
while by Theorem 3.1 we obtain
$$0.9755=\tau(A\circ A^{-1})\ge 0.8250.$$
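The figures above can be reproduced with a short script (ours, assuming the dominance_quantities helper from Section 1): it recomputes the bound of Theorem 3.2 in [9], the bound of Theorem 3.1 of this paper, and the exact value of $\tau(A\circ A^{-1})$ for the matrix A of this example.

import numpy as np
# assumes dominance_quantities(A) from the sketch in Section 1

A = np.array([[ 4., -1., -1., -1.],
              [-2.,  5., -1., -1.],
              [ 0., -2.,  4., -1.],
              [-1., -1., -1.,  4.]])
n = A.shape[0]
d, r, s, m, u, R = dominance_quantities(A)

def lower_bound(w):
    """min_i (a_ii - w_i R_i) / (1 + sum_{j != i} w_ji), with w_i = max_{j != i} w_ij."""
    terms = []
    for i in range(n):
        w_i = max(w[i, j] for j in range(n) if j != i)
        terms.append((A[i, i] - w_i * R[i]) / (1.0 + sum(w[j, i] for j in range(n) if j != i)))
    return min(terms)

tau = min(np.linalg.eigvals(A * np.linalg.inv(A)).real)
print(f"tau(A o A^-1)           = {tau:.4f}")              # approximately 0.9755
print(f"bound of Thm 3.2 in [9] = {lower_bound(m):.4f}")   # 0.8000
print(f"bound of Theorem 3.1    = {lower_bound(u):.4f}")   # 0.8250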

Declarations

Acknowledgements

This research is supported by the National Natural Science Foundation of China (No. 11101069).

Authors’ Affiliations

(1)
School of Mathematical Sciences, University of Electronic Science and Technology of China

References

  1. Berman A, Plemmons RJ: Nonnegative Matrices in the Mathematical Sciences. Classics in Applied Mathematics 9. SIAM, Philadelphia; 1994.
  2. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1991.
  3. Fiedler M, Markham TL: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 1988, 101: 1–8.
  4. Fiedler M, Johnson CR, Markham T, Neumann M: A trace inequality for M-matrices and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 1985, 71: 81–94.
  5. Song YZ: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2000, 305: 99–105. 10.1016/S0024-3795(99)00224-4
  6. Yong XR: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 2000, 320: 167–171. 10.1016/S0024-3795(00)00211-1
  7. Chen SC: A lower bound for the minimum eigenvalue of the Hadamard product of matrices. Linear Algebra Appl. 2004, 378: 159–166.
  8. Li HB, Huang TZ, Shen SQ, Li H: Lower bounds for the eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2007, 420: 235–247. 10.1016/j.laa.2006.07.008
  9. Li YT, Chen FB, Wang DF: New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2009, 430: 1423–1431. 10.1016/j.laa.2008.11.002
  10. Varga RS: Minimal Gerschgorin sets. Pac. J. Math. 1965, 15(2): 719–729. 10.2140/pjm.1965.15.719
  11. Sinkhorn R: A relationship between arbitrary positive matrices and doubly stochastic matrices. Ann. Math. Stat. 1964, 35: 876–879. 10.1214/aoms/1177703591

Copyright

© Cheng et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.