Open Access

Some new inequalities for the Hadamard product of M-matrices

Journal of Inequalities and Applications 2013, 2013:581

https://doi.org/10.1186/1029-242X-2013-581

Received: 13 July 2013

Accepted: 20 November 2013

Published: 11 December 2013

Abstract

If $A$ and $B$ are $n \times n$ nonsingular M-matrices, a new lower bound for the minimum eigenvalue $\tau(B \circ A^{-1})$ of the Hadamard product of $B$ and $A^{-1}$ is derived. As a consequence, a new lower bound for the minimum eigenvalue $\tau(A \circ A^{-1})$ of the Hadamard product of $A$ and its inverse $A^{-1}$ is given. Theoretical results and a numerical example demonstrate that the new bounds improve several existing ones.

MSC: 15A06, 15A18, 15A48.

Keywords

M-matrix; lower bounds; Hadamard product; minimum eigenvalue

1 Introduction

Throughout this paper, for any positive integer $n$, let $N = \{1, 2, \ldots, n\}$. The set of all $n \times n$ real matrices is denoted by $\mathbb{R}^{n \times n}$, and $\mathbb{C}^{n \times n}$ denotes the set of all $n \times n$ complex matrices.

A matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is called a nonnegative matrix if $a_{ij} \ge 0$ for all $i, j \in N$. The spectral radius of $A$ is denoted by $\rho(A)$. If $A$ is a nonnegative matrix, the Perron-Frobenius theorem guarantees that $\rho(A)$ is an eigenvalue of $A$.

$Z_n$ denotes the class of all $n \times n$ real matrices all of whose off-diagonal entries are nonpositive. An $n \times n$ matrix $A$ is called an M-matrix if there exist an $n \times n$ nonnegative matrix $B$ and a nonnegative real number $\lambda$ such that $A = \lambda I - B$ and $\lambda \ge \rho(B)$, where $I$ is the identity matrix. If $\lambda > \rho(B)$, we call $A$ a nonsingular M-matrix; if $\lambda = \rho(B)$, we call $A$ a singular M-matrix. Denote by $M_n$ the set of all $n \times n$ nonsingular M-matrices.

Let $A \in Z_n$, and let $\tau(A) = \min\{\operatorname{Re}(\lambda) : \lambda \in \sigma(A)\}$. Basic for our purpose are the following simple facts (see Problems 16, 19 and 28 in Section 2.5 of [1]):

(1) $\tau(A) \in \sigma(A)$; $\tau(A)$ is called the minimum eigenvalue of $A$.

(2) If $A, B \in M_n$ and $A \ge B$, then $\tau(A) \ge \tau(B)$.

(3) If $A \in M_n$, then $\rho(A^{-1})$ is the Perron eigenvalue of the nonnegative matrix $A^{-1}$, and $\tau(A) = \frac{1}{\rho(A^{-1})}$ is a positive real eigenvalue of $A$.

For two matrices $A = (a_{ij})$ and $B = (b_{ij})$, the Hadamard product of $A$ and $B$ is the matrix $A \circ B = (a_{ij} b_{ij})$. If $A$ and $B$ are two nonsingular M-matrices, then $B \circ A^{-1}$ is also a nonsingular M-matrix [2].
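
As a small numerical illustration of fact (3) and of the Hadamard product (a sketch of ours with an arbitrary sample matrix, not part of the original argument), one can check in NumPy that $\tau(A) = 1/\rho(A^{-1})$ and form $A \circ A^{-1}$ elementwise:

```python
import numpy as np

# A sample nonsingular M-matrix: nonpositive off-diagonal entries and
# strict row diagonal dominance.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [ 0.0, -2.0,  4.0]])

tau_A = min(np.linalg.eigvals(A).real)                    # minimum eigenvalue tau(A)
rho_invA = max(abs(np.linalg.eigvals(np.linalg.inv(A))))  # rho(A^{-1})
print(tau_A, 1.0 / rho_invA)                              # the two values agree (fact (3))

H = A * np.linalg.inv(A)                                  # Hadamard product A o A^{-1}
print(min(np.linalg.eigvals(H).real))                     # tau(A o A^{-1})
```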

Let $A, B \in M_n$ and $A^{-1} = (\beta_{ij})$. In [[1], Theorem 5.7.31] the following classical result is given:
$$\tau(B \circ A^{-1}) \ge \tau(B) \min_{1 \le i \le n} \beta_{ii}.$$
(1.1)
Huang [[3], Theorem 9] improved this result and obtained the following:
$$\tau(B \circ A^{-1}) \ge \frac{1 - \rho(J_A)\rho(J_B)}{1 + \rho^{2}(J_A)} \min_{1 \le i \le n} \frac{b_{ii}}{a_{ii}},$$
(1.2)

where $\rho(J_A)$ and $\rho(J_B)$ are the spectral radii of the Jacobi iteration matrices $J_A$ and $J_B$ of $A$ and $B$, respectively.

The lower bound (1.1) is simple to compute but often not very sharp, while the lower bound (1.2) is difficult to evaluate.

Recently, Li et al. [[4], Theorem 2.1] improved these two results and gave a new lower bound for $\tau(B \circ A^{-1})$, namely
$$\tau(B \circ A^{-1}) \ge \min_{i} \left\{ \frac{b_{ii} - s_i \sum_{k \ne i} |b_{ki}|}{a_{ii}} \right\},$$
(1.3)

where $r_{li} = \frac{|a_{li}|}{|a_{ll}| - \sum_{k \ne l, i} |a_{lk}|}$, $l \ne i$; $r_i = \max_{l \ne i} \{ r_{li} \}$, $i \in N$; $s_{ji} = \frac{|a_{ji}| + \sum_{k \ne j, i} |a_{jk}| r_k}{a_{jj}}$, $j \ne i$, $j \in N$; $s_i = \max_{j \ne i} \{ s_{ij} \}$, $i \in N$.

For an M-matrix $A$, Fiedler et al. showed in [5] that $\tau(A \circ A^{-1}) \le 1$. Subsequently, Fiedler and Markham [[2], Theorem 3] gave a lower bound on $\tau(A \circ A^{-1})$,
$$\tau(A \circ A^{-1}) \ge \frac{1}{n},$$
(1.4)
and proposed the following conjecture:
$$\tau(A \circ A^{-1}) \ge \frac{2}{n}.$$
(1.5)

Yong [6] and Song [7] have independently proved this conjecture.

Li et al. [[8], Theorem 3.1] obtained the following result:
$$\tau(A \circ A^{-1}) \ge \min_{i} \left\{ \frac{a_{ii} - t_i R_i}{1 + \sum_{j \ne i} t_{ji}} \right\},$$
(1.6)

which depends only on the entries of $A = (a_{ij})$, where $R_i = \sum_{k \ne i} |a_{ik}|$; $d_i = \frac{R_i}{|a_{ii}|}$, $i \in N$; $t_{ji} = \frac{|a_{ji}| + \sum_{k \ne j, i} |a_{jk}| d_k}{|a_{jj}|}$, $j \ne i$, $j \in N$; $t_i = \max_{j \ne i} \{ t_{ij} \}$, $i \in N$.

Li et al. [[9], Theorem 3.2] improved the bound (1.6) and obtained the following result:
$$\tau(A \circ A^{-1}) \ge \min_{i} \left\{ \frac{a_{ii} - m_i R_i}{1 + \sum_{j \ne i} m_{ji}} \right\},$$
(1.7)

where $r_{li} = \frac{|a_{li}|}{|a_{ll}| - \sum_{k \ne l, i} |a_{lk}|}$, $l \ne i$; $r_i = \max_{l \ne i} \{ r_{li} \}$, $i \in N$; $m_{ji} = \frac{|a_{ji}| + \sum_{k \ne j, i} |a_{jk}| r_i}{|a_{jj}|}$, $j \ne i$, $j \in N$; $m_i = \max_{j \ne i} \{ m_{ij} \}$, $i \in N$.

Recently, Li et al. [[10], Theorem 3.2] improved the bound (1.7) and gave a new lower bound for $\tau(A \circ A^{-1})$, namely
$$\tau(A \circ A^{-1}) \ge \min_{i} \left\{ \frac{a_{ii} - T_i R_i}{1 + \sum_{j \ne i} T_{ji}} \right\},$$
(1.8)

where $T_{ji} = \min \{ m_{ji}, s_{ji} \}$, $j \ne i$; $T_i = \max_{j \ne i} \{ T_{ij} \}$, $i \in N$.

In the present paper we give a new lower bound on $\tau(B \circ A^{-1})$ and, as a consequence, a new lower bound on $\tau(A \circ A^{-1})$. These bounds improve several existing results.

The following is the list of notation that we use throughout: for $i, j, k, l \in N$,
$$R_i = \sum_{k \ne i} |a_{ik}|, \qquad C_i = \sum_{k \ne i} |a_{ki}|, \qquad d_i = \frac{R_i}{|a_{ii}|}, \qquad \hat{c}_i = \frac{C_i}{|a_{ii}|};$$
$$r_{li} = \frac{|a_{li}|}{|a_{ll}| - \sum_{k \ne l, i} |a_{lk}|}, \quad l \ne i; \qquad r_i = \max_{l \ne i} \{ r_{li} \}, \quad i \in N;$$
$$c_{il} = \frac{|a_{il}|}{|a_{ll}| - \sum_{k \ne l, i} |a_{kl}|}, \quad l \ne i; \qquad c_i = \max_{l \ne i} \{ c_{il} \}, \quad i \in N;$$
$$m_{ji} = \frac{|a_{ji}| + \sum_{k \ne j, i} |a_{jk}| r_i}{|a_{jj}|}, \quad j \ne i; \qquad m_i = \max_{j \ne i} \{ m_{ij} \}, \quad i \in N;$$
$$s_{ji} = \frac{|a_{ji}| + \sum_{k \ne j, i} |a_{jk}| r_k}{|a_{jj}|}, \quad j \ne i; \qquad s_i = \max_{j \ne i} \{ s_{ij} \}, \quad i \in N;$$
$$T_{ji} = \min \{ m_{ji}, s_{ji} \}, \quad j \ne i; \qquad T_i = \max_{j \ne i} \{ T_{ij} \}, \quad i \in N.$$
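
For readers who wish to evaluate the bounds numerically, the quantities above can be computed directly from the entries of $A$. The following Python sketch (the function name and structure are our own choice, not part of the paper) implements $R_i$, $r_i$, $m_{ji}$, $s_{ji}$, $T_{ji}$ and $T_i$ for a strictly row diagonally dominant M-matrix.

```python
import numpy as np

def notation_quantities(A):
    """R_i, r_i, m_ji, s_ji, T_ji and T_i, following the definitions above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    absA = np.abs(A)
    R = absA.sum(axis=1) - np.diag(absA)          # R_i = sum_{k != i} |a_ik|

    # r_li = |a_li| / (|a_ll| - sum_{k != l,i} |a_lk|), l != i;  r_i = max_{l != i} r_li
    r = np.zeros(n)
    for i in range(n):
        r[i] = max(absA[l, i] / (absA[l, l] - (R[l] - absA[l, i]))
                   for l in range(n) if l != i)

    # m_ji and s_ji differ only in the r-factor weighting the off-diagonal sum.
    m = np.zeros((n, n))
    s = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            off = [k for k in range(n) if k not in (i, j)]
            m[j, i] = (absA[j, i] + sum(absA[j, k] * r[i] for k in off)) / absA[j, j]
            s[j, i] = (absA[j, i] + sum(absA[j, k] * r[k] for k in off)) / absA[j, j]

    T = np.minimum(m, s)                          # T_ji = min{m_ji, s_ji}, j != i
    T_i = np.array([max(T[i, j] for j in range(n) if j != i) for i in range(n)])
    return R, r, m, s, T, T_i
```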

2 Some lemmas and the main results

In order to prove our results, we first give some lemmas.

Lemma 2.1 [11]

If $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is an M-matrix, then there exists a diagonal matrix $D$ with positive diagonal entries such that $D^{-1} A D$ is a strictly row diagonally dominant M-matrix.

Lemma 2.2 [1]

Let $A, B = (a_{ij}) \in \mathbb{C}^{n \times n}$, and suppose that $D \in \mathbb{C}^{n \times n}$ and $E \in \mathbb{C}^{n \times n}$ are diagonal matrices. Then
$$D(A \circ B)E = (DAE) \circ B = (DA) \circ (BE) = (AE) \circ (DB) = A \circ (DBE).$$

Lemma 2.3 [10]

If $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is a strictly row diagonally dominant M-matrix, then $A^{-1} = (\beta_{ij})$ satisfies
$$\beta_{ji} \le T_{ji} \beta_{ii}, \quad i, j \in N, \ i \ne j.$$

Lemma 2.4 [12]

If $A^{-1}$ is a doubly stochastic matrix, then $Ae = e$ and $A^{T} e = e$, where $e = (1, 1, \ldots, 1)^{T}$.

Lemma 2.5 [9]

Let $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ be a strictly row diagonally dominant M-matrix. Then, for $A^{-1} = (\beta_{ij})$, we have
$$\beta_{ii} \ge \frac{1}{a_{ii}}, \quad i \in N.$$

Lemma 2.6 [10]

If $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is an M-matrix and $A^{-1} = (\beta_{ij})$ is a doubly stochastic matrix, then
$$\beta_{ii} \ge \frac{1}{1 + \sum_{j \ne i} T_{ji}}, \quad i \in N.$$

Lemma 2.7 [13]

Let $A = (a_{ij}) \in \mathbb{C}^{n \times n}$, and let $x_1, x_2, \ldots, x_n$ be positive real numbers. Then all the eigenvalues of $A$ lie in the region
$$\bigcup_{\substack{i, j = 1 \\ i \ne j}}^{n} \left\{ z \in \mathbb{C} : |z - a_{ii}| \, |z - a_{jj}| \le \Bigl( x_i \sum_{k \ne i} \frac{1}{x_k} |a_{ki}| \Bigr) \Bigl( x_j \sum_{k \ne j} \frac{1}{x_k} |a_{kj}| \Bigr) \right\}.$$
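
Lemma 2.7 is the key tool in the proof of Theorem 2.1. A quick numerical check (our own illustration with an arbitrary matrix and arbitrary positive weights) confirms that every eigenvalue of a sample matrix lies in the stated union of Brauer-type regions:

```python
import numpy as np
from itertools import permutations

def in_lemma_2_7_region(A, x, z, tol=1e-9):
    """True if z lies in the union over i != j of the sets in Lemma 2.7."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    for i, j in permutations(range(n), 2):
        col_i = x[i] * sum(abs(A[k, i]) / x[k] for k in range(n) if k != i)
        col_j = x[j] * sum(abs(A[k, j]) / x[k] for k in range(n) if k != j)
        if abs(z - A[i, i]) * abs(z - A[j, j]) <= col_i * col_j + tol:
            return True
    return False

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
x = rng.uniform(0.5, 2.0, size=4)          # any positive real weights are allowed
print(all(in_lemma_2_7_region(A, x, z) for z in np.linalg.eigvals(A)))  # expected: True
```
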
Theorem 2.1 Let $A, B = (b_{ij}) \in \mathbb{R}^{n \times n}$ be two nonsingular M-matrices, and let $A^{-1} = (\beta_{ij})$. Then
$$\tau(B \circ A^{-1}) \ge \min_{i \ne j} \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\}.$$
(2.1)

Proof It is evident that (2.1) is an equality for $n = 1$.

We next assume that $n \ge 2$.

If $A$ is an M-matrix, then by Lemma 2.1 we know that there exists a diagonal matrix $D$ with positive diagonal entries such that $D^{-1}AD$ is a strictly row diagonally dominant M-matrix and, by Lemma 2.2 and the similarity invariance of eigenvalues,
$$\tau(B \circ A^{-1}) = \tau\bigl(D^{-1}(B \circ A^{-1})D\bigr) = \tau\bigl(B \circ (D^{-1}AD)^{-1}\bigr).$$

So, for convenience and without loss of generality, we assume that $A$ is a strictly row diagonally dominant M-matrix. Therefore, $0 < T_i < 1$, $i \in N$.

If $B \circ A^{-1}$ is irreducible, then $B$ and $A$ are irreducible. Let $\tau(B \circ A^{-1}) = \lambda$, so that $0 < \lambda < b_{ii}\beta_{ii}$, $i \in N$. Thus, by Lemma 2.7 (applied to $B \circ A^{-1}$ with $x_i = T_i$), there is a pair $(i, j)$ of positive integers with $i \ne j$ such that
$$|\lambda - b_{ii}\beta_{ii}| \, |\lambda - b_{jj}\beta_{jj}| \le \Bigl( T_i \sum_{k \ne i} \frac{1}{T_k} |b_{ki}\beta_{ki}| \Bigr) \Bigl( T_j \sum_{k \ne j} \frac{1}{T_k} |b_{kj}\beta_{kj}| \Bigr).$$
Observe that, by Lemma 2.3 and $T_{ki} \le T_k$,
$$\Bigl( T_i \sum_{k \ne i} \frac{1}{T_k} |b_{ki}\beta_{ki}| \Bigr) \Bigl( T_j \sum_{k \ne j} \frac{1}{T_k} |b_{kj}\beta_{kj}| \Bigr) \le \Bigl( T_i \sum_{k \ne i} \frac{1}{T_k} |b_{ki}| T_{ki} \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} \frac{1}{T_k} |b_{kj}| T_{kj} \beta_{jj} \Bigr) \le \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr).$$
Thus, we have
$$|\lambda - b_{ii}\beta_{ii}| \, |\lambda - b_{jj}\beta_{jj}| \le \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr).$$
Then we have
$$\lambda \ge \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\}.$$
That is,
$$\begin{aligned} \tau(B \circ A^{-1}) &\ge \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\ge \min_{i \ne j} \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\}. \end{aligned}$$

Now, assume that $B \circ A^{-1}$ is reducible. It is known that a matrix in $Z_n$ is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [14]). If we denote by $D = (d_{ij})$ the $n \times n$ permutation matrix with $d_{12} = d_{23} = \cdots = d_{n-1,n} = d_{n1} = 1$ and $d_{ij} = 0$ otherwise, then both $A - tD$ and $B - tD$ are irreducible nonsingular M-matrices for any positive real number $t$ chosen sufficiently small that all the leading principal minors of both $A - tD$ and $B - tD$ are positive. Now we substitute $A - tD$ and $B - tD$ for $A$ and $B$, respectively, in the previous case; then, letting $t \to 0$, the result follows by continuity. □
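
A direct numerical evaluation of the new bound is straightforward once $A^{-1}$ and the $T_i$ are available. The sketch below (our own illustration, reusing the function notation_quantities given after the notation list in Section 1) computes the right-hand side of (2.1); it can be compared with $\tau(B \circ A^{-1})$ obtained from an eigensolver.

```python
import numpy as np

def bound_2_1(A, B):
    """Right-hand side of (2.1): a lower bound for tau(B o A^{-1})."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n = A.shape[0]
    beta = np.diag(np.linalg.inv(A))                     # beta_ii
    T_i = notation_quantities(A)[-1]                     # the vector of T_i values
    col_b = np.abs(B).sum(axis=0) - np.abs(np.diag(B))   # sum_{k != i} |b_ki|
    best = np.inf
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            u, v = B[i, i] * beta[i], B[j, j] * beta[j]
            p, q = T_i[i] * col_b[i] * beta[i], T_i[j] * col_b[j] * beta[j]
            best = min(best, 0.5 * (u + v - np.sqrt((u - v) ** 2 + 4.0 * p * q)))
    return best

# For comparison, tau(B o A^{-1}) itself is
#     min(np.linalg.eigvals(B * np.linalg.inv(A)).real)
# and by Theorem 2.1 it is bounded below by bound_2_1(A, B).
```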

Theorem 2.2 Let $A, B = (b_{ij}) \in \mathbb{R}^{n \times n}$ be two nonsingular M-matrices, and let $A^{-1} = (\beta_{ij})$. Then
$$\min_{i \ne j} \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \ge \min_{1 \le i \le n} \left\{ \frac{b_{ii} - s_i \sum_{k \ne i} |b_{ki}|}{a_{ii}} \right\}.$$
Proof Since $T_{ji} = \min\{m_{ji}, s_{ji}\}$, $j \ne i$, and $T_i = \max_{j \ne i}\{T_{ij}\}$, we have $T_i \le s_i$, $i \in N$. Without loss of generality, for $i \ne j$, assume that
$$b_{ii}\beta_{ii} - T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \le b_{jj}\beta_{jj} - T_j \sum_{k \ne j} |b_{kj}| \beta_{jj}.$$
(2.2)
Thus, (2.2) is equivalent to
$$T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \le T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} + b_{jj}\beta_{jj} - b_{ii}\beta_{ii}.$$
(2.3)
From (2.1) and (2.3), we have
$$\begin{aligned} & \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\quad \ge \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} + b_{jj}\beta_{jj} - b_{ii}\beta_{ii} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\quad = \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr)^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \bigl( b_{jj}\beta_{jj} - b_{ii}\beta_{ii} \bigr) \right]^{\frac{1}{2}} \right\} \\ &\quad = \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ \Bigl( b_{jj}\beta_{jj} - b_{ii}\beta_{ii} + 2 T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr)^{2} \right]^{\frac{1}{2}} \right\} \\ &\quad = \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \Bigl( b_{jj}\beta_{jj} - b_{ii}\beta_{ii} + 2 T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \right\} \\ &\quad = b_{ii}\beta_{ii} - T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} = \beta_{ii} \Bigl( b_{ii} - T_i \sum_{k \ne i} |b_{ki}| \Bigr) \\ &\quad \ge \beta_{ii} \Bigl( b_{ii} - s_i \sum_{k \ne i} |b_{ki}| \Bigr) \ge \frac{b_{ii} - s_i \sum_{k \ne i} |b_{ki}|}{a_{ii}}, \end{aligned}$$
where the last inequality follows from Lemma 2.5. Thus, we have
$$\begin{aligned} \tau(B \circ A^{-1}) &\ge \min_{i \ne j} \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\ge \min_{1 \le i \le n} \left\{ \frac{b_{ii} - s_i \sum_{k \ne i} |b_{ki}|}{a_{ii}} \right\}. \end{aligned}$$

The proof is completed. □

Remark 2.1 Theorem 2.2 shows that the result of Theorem 2.1 is better than the result of Theorem 2.1 in [4].

If $A = B$, then from Theorem 2.1 we obtain the following corollary.

Corollary 2.1 Let $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ be an M-matrix, and let $A^{-1} = (\beta_{ij})$ be a doubly stochastic matrix. Then
$$\tau(A \circ A^{-1}) \ge \min_{i \ne j} \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |a_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\}.$$
(2.4)
Theorem 2.3 Let $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ be an M-matrix, and let $A^{-1} = (\beta_{ij})$ be a doubly stochastic matrix. Then
$$\min_{i \ne j} \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |a_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \ge \min_{i} \left\{ \frac{a_{ii} - T_i R_i}{1 + \sum_{j \ne i} T_{ji}} \right\}.$$
Proof Since $A$ is an M-matrix and $A^{-1}$ is a doubly stochastic matrix, by Lemma 2.4 we have
$$a_{ii} = \sum_{k \ne i} |a_{ik}| + 1 = \sum_{k \ne i} |a_{ki}| + 1, \quad i \in N.$$
Without loss of generality, for $i \ne j$, assume that
$$a_{ii}\beta_{ii} - T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \le a_{jj}\beta_{jj} - T_j \sum_{k \ne j} |a_{kj}| \beta_{jj}.$$
(2.5)
Thus, (2.5) is equivalent to
$$T_j \sum_{k \ne j} |a_{kj}| \beta_{jj} \le a_{jj}\beta_{jj} - a_{ii}\beta_{ii} + T_i \sum_{k \ne i} |a_{ki}| \beta_{ii}.$$
(2.6)
From (2.4) and (2.6), we have
$$\begin{aligned} & \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |a_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\quad \ge \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \Bigl( a_{jj}\beta_{jj} - a_{ii}\beta_{ii} + T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\quad = \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr)^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \bigl( a_{jj}\beta_{jj} - a_{ii}\beta_{ii} \bigr) \right]^{\frac{1}{2}} \right\} \\ &\quad = \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ \Bigl( a_{jj}\beta_{jj} - a_{ii}\beta_{ii} + 2 T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr)^{2} \right]^{\frac{1}{2}} \right\} \\ &\quad = \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \Bigl( a_{jj}\beta_{jj} - a_{ii}\beta_{ii} + 2 T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \right\} \\ &\quad = a_{ii}\beta_{ii} - T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} = \beta_{ii} \Bigl( a_{ii} - T_i \sum_{k \ne i} |a_{ki}| \Bigr) \\ &\quad \ge \frac{a_{ii} - T_i R_i}{1 + \sum_{j \ne i} T_{ji}}, \end{aligned}$$
where the last inequality uses $\sum_{k \ne i} |a_{ki}| = R_i$ and Lemma 2.6. Thus, we have
$$\begin{aligned} \tau(A \circ A^{-1}) &\ge \min_{i \ne j} \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |a_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} \\ &\ge \min_{i} \left\{ \frac{a_{ii} - T_i R_i}{1 + \sum_{j \ne i} T_{ji}} \right\}. \end{aligned}$$

The proof is completed. □

Remark 2.2 Theorem 2.3 shows that the result of Corollary 2.1 is better than the result of Theorem 3.2 in [10].

3 Example

For convenience, we consider the same M-matrices $A$ and $B$ as in [4]:
$$A = \begin{bmatrix} 4 & -1 & -1 & -1 \\ -2 & 5 & -1 & -1 \\ 0 & -2 & 4 & -1 \\ -1 & -1 & -1 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & -0.5 & 0 & 0 \\ -0.5 & 1 & -0.5 & 0 \\ 0 & -0.5 & 1 & -0.5 \\ 0 & 0 & -0.5 & 1 \end{bmatrix}.$$
(1) We consider the lower bound for $\tau(B \circ A^{-1})$.
If we apply (1.1), we have
$$\tau(B \circ A^{-1}) \ge \tau(B) \min_{1 \le i \le n} \beta_{ii} = 0.07.$$
If we apply (1.2), we have
$$\tau(B \circ A^{-1}) \ge \frac{1 - \rho(J_A)\rho(J_B)}{1 + \rho^{2}(J_A)} \min_{1 \le i \le n} \frac{b_{ii}}{a_{ii}} = 0.048.$$
If we apply (1.3), we have
$$\tau(B \circ A^{-1}) \ge \min_{i} \left\{ \frac{b_{ii} - s_i \sum_{k \ne i} |b_{ki}|}{a_{ii}} \right\} = 0.08.$$
If we apply Theorem 2.1, we have
$$\tau(B \circ A^{-1}) \ge \min_{i \ne j} \frac{1}{2} \left\{ b_{ii}\beta_{ii} + b_{jj}\beta_{jj} - \left[ (b_{ii}\beta_{ii} - b_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |b_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |b_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} = 0.1753.$$
In fact, $\tau(B \circ A^{-1}) = 0.2148$.
(2) We consider the lower bound for $\tau(A \circ A^{-1})$.
If we apply (1.5), we have
$$\tau(A \circ A^{-1}) \ge \frac{2}{n} = \frac{1}{2} = 0.5.$$
If we apply (1.6), we have
$$\tau(A \circ A^{-1}) \ge \min_{i} \left\{ \frac{a_{ii} - t_i R_i}{1 + \sum_{j \ne i} t_{ji}} \right\} = 0.6624.$$
If we apply (1.7), we have
$$\tau(A \circ A^{-1}) \ge \min_{i} \left\{ \frac{a_{ii} - m_i R_i}{1 + \sum_{j \ne i} m_{ji}} \right\} = 0.7999.$$
If we apply (1.8), we have
$$\tau(A \circ A^{-1}) \ge \min_{i} \left\{ \frac{a_{ii} - T_i R_i}{1 + \sum_{j \ne i} T_{ji}} \right\} = 0.85.$$
If we apply Corollary 2.1, we have
$$\tau(A \circ A^{-1}) \ge \min_{i \ne j} \frac{1}{2} \left\{ a_{ii}\beta_{ii} + a_{jj}\beta_{jj} - \left[ (a_{ii}\beta_{ii} - a_{jj}\beta_{jj})^{2} + 4 \Bigl( T_i \sum_{k \ne i} |a_{ki}| \beta_{ii} \Bigr) \Bigl( T_j \sum_{k \ne j} |a_{kj}| \beta_{jj} \Bigr) \right]^{\frac{1}{2}} \right\} = 0.9755.$$

In fact, $\tau(A \circ A^{-1}) = 0.9755$.
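
These exact values can be checked numerically, for instance with the following NumPy sketch of ours (it uses the helper bound_2_1 introduced after the proof of Theorem 2.1 for the new bounds; printed decimals are rounded):

```python
import numpy as np

A = np.array([[ 4.0, -1.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0, -1.0],
              [ 0.0, -2.0,  4.0, -1.0],
              [-1.0, -1.0, -1.0,  4.0]])
B = np.array([[ 1.0, -0.5,  0.0,  0.0],
              [-0.5,  1.0, -0.5,  0.0],
              [ 0.0, -0.5,  1.0, -0.5],
              [ 0.0,  0.0, -0.5,  1.0]])

invA = np.linalg.inv(A)
tau_BA = min(np.linalg.eigvals(B * invA).real)   # tau(B o A^{-1}), reported as 0.2148
tau_AA = min(np.linalg.eigvals(A * invA).real)   # tau(A o A^{-1}), reported as 0.9755
print(round(tau_BA, 4), round(tau_AA, 4))

# The new lower bounds of Theorem 2.1 and Corollary 2.1 are bound_2_1(A, B)
# and bound_2_1(A, A), using the sketch given after the proof of Theorem 2.1.
```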

Remark 3.1 The numerical example shows that the bounds of Theorem 2.1 and Corollary 2.1 are sharper than those of Theorem 2.1 in [4] and Theorem 3.2 in [10].

Declarations

Acknowledgements

This work is supported by the Natural Science Foundation of China (No: 71161020).

Authors’ Affiliations

(1)
Department of Engineering, Oxbridge College, Kunming University of Science and Technology

References

1. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1991.
2. Fiedler M, Markham TL: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 1988, 101: 1–8.
3. Huang R: Some inequalities for the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 2008, 428: 1551–1559. doi:10.1016/j.laa.2007.10.001
4. Li YT, Li YY, Wang RW, Wang YQ: Some new bounds on eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 2010, 432: 536–545. doi:10.1016/j.laa.2009.08.036
5. Fiedler M, Johnson CR, Markham TL, Neumann M: A trace inequality for M-matrix and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 1985, 71: 81–94.
6. Yong XR: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 2000, 320: 167–171. doi:10.1016/S0024-3795(00)00211-1
7. Song YZ: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2000, 305: 99–105. doi:10.1016/S0024-3795(99)00224-4
8. Li HB, Huang TZ, Shen SQ, Li H: Lower bounds for the minimum eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2007, 420: 235–247. doi:10.1016/j.laa.2006.07.008
9. Li YT, Chen FB, Wang DF: New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 2009, 430: 1423–1431. doi:10.1016/j.laa.2008.11.002
10. Li YT, Liu X, Yang XY, Li CQ: Some new lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. Electron. J. Linear Algebra 2011, 22: 630–643.
11. Chen JL: Special Matrices. Qing Hua University Press, Beijing; 2000.
12. Yong XR, Wang Z: On a conjecture of Fiedler and Markham. Linear Algebra Appl. 1999, 288: 259–267.
13. Horn RA, Johnson CR: Matrix Analysis. Cambridge University Press, Cambridge; 1985.
14. Berman A, Plemmons RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia; 1979.

Copyright

© Chen; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.