
Two inequalities for the Hadamard product of matrices

Abstract

Using an estimate of the Perron root of a nonnegative matrix in terms of circuits in the associated directed graph, two new upper bounds for the spectral radius of the Hadamard product of matrices are proposed. These bounds improve some existing results, as numerical examples illustrate.

MSC 2010: 15A42; 15B34

1 Introduction

Let M_n denote the set of all n × n complex matrices and let N denote the set {1, 2, ..., n}. Let A = (a_ij), B = (b_ij) ∈ M_n. If a_ij − b_ij ≥ 0 for all i, j, we say that A ≥ B, and if a_ij ≥ 0 for all i, j, we say that A is nonnegative. The spectral radius of A is denoted by ρ(A). If A is a nonnegative matrix, the Perron-Frobenius theorem guarantees that ρ(A) ∈ σ(A), where σ(A) denotes the spectrum of A.

If there does not exist a permutation matrix P such that

$$P^{T} A P = \begin{pmatrix} A_1 & A_{12} \\ 0 & A_2 \end{pmatrix},$$

where A_1, A_2 are square matrices, then A is called irreducible. Let A be an irreducible nonnegative matrix. It is well known that there exists a positive vector u such that Au = ρ(A)u. The Hadamard product of A and B is defined as A ∘ B = (a_ij b_ij) ∈ M_n. Let A ∈ M_n, and let

$$r_i(A) = \sum_{j=1}^{n} |a_{ij}|, \qquad R_i(A) = \sum_{j \neq i} |a_{ij}|, \qquad 1 \le i \le n,$$

denote the absolute row sums and the deleted absolute row sums of A, respectively.

Let ς(A) represent the set of all simple circuits in the digraph Γ(A) of A. Recall that a circuit of length k in Γ(A) is an ordered sequence γ = (i_1, ..., i_k, i_{k+1}), where i_1, ..., i_k ∈ N are all distinct and i_{k+1} = i_1. The set {i_1, ..., i_k} is called the support of γ and is denoted by γ̄. The length of the circuit is denoted by |γ|.
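
For readers who wish to experiment with these notions numerically, the following Python sketch (an illustration only, with an arbitrary matrix not taken from the paper, and assuming the convention that Γ(A) is built on the nonzero off-diagonal entries, so that loops are not listed) computes the row sums r_i(A) and R_i(A) and enumerates the simple circuits with their supports and lengths.

```python
import numpy as np
import networkx as nx

# An arbitrary nonnegative matrix used only to illustrate the notation.
A = np.array([[4.0, 1.0, 0.0],
              [0.5, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
n = A.shape[0]

r = np.abs(A).sum(axis=1)        # r_i(A): absolute row sums
R = r - np.abs(np.diag(A))       # R_i(A): deleted absolute row sums
print("r_i:", r, "R_i:", R)

# Digraph with an arc i -> j for every nonzero off-diagonal entry a_ij
# (assumed convention for Gamma(A); loops are ignored here).
G = nx.DiGraph([(i, j) for i in range(n) for j in range(n)
                if i != j and A[i, j] != 0])
for gamma in nx.simple_cycles(G):    # each circuit is returned via its support
    print("support:", gamma, "length |gamma| =", len(gamma))
```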

In [1], there is a simple estimate for ρ(A ∘ B): if A ≥ 0 and B ≥ 0, then ρ(A ∘ B) ≤ ρ(A)ρ(B).
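
This classical estimate is easy to check numerically; a minimal sketch with arbitrarily chosen nonnegative matrices:

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))   # spectral radius

# Arbitrary nonnegative matrices chosen for illustration.
A = np.array([[4.0, 1.0, 0.0], [0.5, 2.0, 1.0], [1.0, 0.0, 3.0]])
B = np.array([[1.0, 2.0, 0.5], [0.0, 3.0, 1.0], [1.0, 1.0, 2.0]])

H = A * B                                        # Hadamard product A ∘ B (entrywise)
print(rho(H), "<=", rho(A) * rho(B))             # rho(A ∘ B) <= rho(A) rho(B)
```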

Recently, using the Gersgorin theorem, which involves only the entries of one row or column of the matrix, Fang [2] and Huang [3] gave new estimates for ρ(A ∘ B) that are sharper than the bound of [1]. Using the Brauer theorem, which involves the entries of two rows of the matrix at a time, the authors of [4, 5] derived new upper bounds for ρ(A ∘ B) that improve the results of [2, 3]. Besides the Gersgorin and Brauer theorems, the Brualdi theorem is another important eigenvalue inclusion theorem, and it involves more entries of the matrix than the other two. In view of this, Liu [4] posed the following problem: can the Brualdi theorem be used to obtain estimates that are sharper than the previous results? In this paper, we answer this question affirmatively. Two new upper bounds for ρ(A ∘ B) are provided. These bounds improve some existing results, and numerical examples illustrate that they are sharper.

2 Main results

First, we give some lemmas which are useful for obtaining the main results.

Lemma 2.1 [6] Let A ∈ M_n be a nonnegative matrix. If A_k is a principal submatrix of A, then ρ(A_k) ≤ ρ(A). If, in addition, A is irreducible and A_k ≠ A, then ρ(A_k) < ρ(A).

Lemma 2.2 [7] Let A ∈ M_n be a nonnegative matrix with ς(A) ≠ ∅. Then for any diagonal matrix D with positive diagonal entries, we have

$$\min_{\gamma \in \varsigma(A)} \Bigl\{ \prod_{i \in \bar{\gamma}} r_i(D^{-1} A D) \Bigr\}^{1/|\gamma|} \le \rho(A) \le \max_{\gamma \in \varsigma(A)} \Bigl\{ \prod_{i \in \bar{\gamma}} r_i(D^{-1} A D) \Bigr\}^{1/|\gamma|}.$$

Lemma 2.3 [4] Let A, B ∈ M_n. If E, F are diagonal matrices of order n, then

$$E(A \circ B)F = (EAF) \circ B = (EA) \circ (BF) = (AF) \circ (EB) = A \circ (EBF).$$
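
Since Lemma 2.3 is a purely entrywise identity, it can be sanity-checked numerically; a short sketch with random matrices and random diagonal E, F:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B = rng.random((n, n)), rng.random((n, n))
E, F = np.diag(rng.random(n)), np.diag(rng.random(n))

lhs = E @ (A * B) @ F                  # E (A ∘ B) F
for rhs in ((E @ A @ F) * B,           # (E A F) ∘ B
            (E @ A) * (B @ F),         # (E A) ∘ (B F)
            (A @ F) * (E @ B),         # (A F) ∘ (E B)
            A * (E @ B @ F)):          # A ∘ (E B F)
    assert np.allclose(lhs, rhs)
print("Lemma 2.3 holds for this random instance")
```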

Theorem 2.1 Let A, B ∈ M_n with A ≥ 0 and B ≥ 0. Then

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr) \Bigr\}^{1/|\gamma|}. \tag{1}$$

Proof. If A ∘ B is irreducible, then A and B are irreducible. From Lemma 2.1, we have

$$\rho(A) - a_{ii} > 0, \qquad \rho(B) - b_{ii} > 0, \qquad i \in N.$$

Since A = (a_ij), B = (b_ij) are nonnegative irreducible, there exist two positive vectors u, v such that Au = ρ(A)u, Bv = ρ(B)v. Thus, we have

$$a_{ii} + \sum_{j \neq i} a_{ij} \frac{u_j}{u_i} = \rho(A), \quad i \in N, \tag{2}$$

and

$$b_{ii} + \sum_{j \neq i} b_{ij} \frac{v_j}{v_i} = \rho(B), \quad i \in N. \tag{3}$$

Define U = diag(u_1, ..., u_n), V = diag(v_1, ..., v_n). Let Â = (â_ij) = U^{-1}AU and B̂ = (b̂_ij) = V^{-1}BV. From (2) and (3), we have

$$r_i(\hat{A}) = a_{ii} + \sum_{j \neq i} a_{ij} \frac{u_j}{u_i} = \rho(A), \quad i \in N,$$

and

$$r_i(\hat{B}) = b_{ii} + \sum_{j \neq i} b_{ij} \frac{v_j}{v_i} = \rho(B), \quad i \in N.$$

Let D = VU. According to Lemma 2.2, for the positive diagonal matrix D, we have

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} r_i\bigl[ D^{-1}(A \circ B) D \bigr] \Bigr\}^{1/|\gamma|}.$$

Using Lemma 2.3, we have

$$D^{-1}(A \circ B)D = U^{-1}V^{-1}(A \circ B)VU = U^{-1}\bigl( A \circ (V^{-1} B V) \bigr) U = (U^{-1} A U) \circ (V^{-1} B V) = \hat{A} \circ \hat{B}.$$

Then,

$$r_i\bigl[ D^{-1}(A \circ B) D \bigr] = r_i(\hat{A} \circ \hat{B}) = a_{ii} b_{ii} + \sum_{j \neq i} \hat{a}_{ij} \hat{b}_{ij} \le a_{ii} b_{ii} + \sum_{j \neq i} \hat{a}_{ij} \sum_{j \neq i} \hat{b}_{ij} = a_{ii} b_{ii} + \bigl(\rho(A) - a_{ii}\bigr)\bigl(\rho(B) - b_{ii}\bigr).$$

Noting that a_ii b_ii + (ρ(A) − a_ii)(ρ(B) − b_ii) = 2a_ii b_ii + ρ(A)ρ(B) − a_iiρ(B) − b_iiρ(A), we have

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr) \Bigr\}^{1/|\gamma|}.$$

If A ∘ B is reducible, denote by P = (p_ij) the n × n permutation matrix with p_12 = p_23 = ⋯ = p_{n1} = 1 and all other entries equal to zero. Then A + tP and B + tP are nonnegative irreducible matrices for every positive real number t, and so is (A + tP) ∘ (B + tP). Substituting A + tP and B + tP for A and B, respectively, in the argument above and letting t → 0⁺, the result follows from the continuity of the spectral radius.
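
The reducible case relies on two facts: the digraph of A + tP contains the cycle 1 → 2 → ⋯ → n → 1, so A + tP is irreducible for every t > 0, and the spectral radius depends continuously on the matrix entries. The following small sketch (with an arbitrary reducible matrix chosen for illustration) shows this limiting behaviour:

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

n = 4
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = 1.0            # p_{12} = p_{23} = ... = 1
P[n - 1, 0] = 1.0                # p_{n1} = 1

A = np.diag([1.0, 2.0, 3.0, 4.0])    # a reducible nonnegative matrix (illustration only)
for t in (1e-1, 1e-3, 1e-6):
    print(t, rho(A + t * P))         # rho(A + tP) -> rho(A) = 4 as t -> 0+
```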

Two bounds for ρ(A ∘ B), given in [2] and [4], respectively, are

$$\rho(A \circ B) \le \max_{1 \le i \le n} \bigl\{ 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr\}, \tag{4}$$

and

$$\rho(A \circ B) \le \max_{i \neq j} \frac{1}{2} \Bigl\{ a_{ii} b_{ii} + a_{jj} b_{jj} + \bigl[ (a_{ii} b_{ii} - a_{jj} b_{jj})^2 + 4 \bigl(\rho(A) - a_{ii}\bigr)\bigl(\rho(B) - b_{ii}\bigr)\bigl(\rho(A) - a_{jj}\bigr)\bigl(\rho(B) - b_{jj}\bigr) \bigr]^{\frac{1}{2}} \Bigr\}. \tag{5}$$

Next, we give a simple comparison between (1) and (4). It is easy to see

$$\begin{aligned} \rho(A \circ B) &\le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr) \Bigr\}^{1/|\gamma|} \\ &\le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \Bigl( \max_{i \in N} \bigl\{ 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr\} \Bigr)^{|\gamma|} \Bigr\}^{1/|\gamma|} \\ &= \max_{i \in N} \bigl\{ 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr\}. \end{aligned}$$

Hence the bound (1) is sharper than the bound (4). We have not been able to compare (1) and (5) theoretically, but the following numerical example shows that the bound of Theorem 2.1 can be sharper than both (4) and (5).

Example 2.1. Consider two 4 × 4 nonnegative matrices

$$A = \begin{pmatrix} 4 & 1 & 0 & 2 \\ 0 & 0.05 & 1 & 1 \\ 0 & 0 & 4 & 0.5 \\ 1 & 0.5 & 0 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}.$$

It is easy to calculate that ρ(A ∘ B) = ρ(A) = 5.4983. By inequalities (4) and (5), we have

$$\rho(A \circ B) \le \max_{1 \le i \le 4} \bigl\{ 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr\} = 16.3949,$$

and ρ(A ∘ B) ≤ 11.6478, and by Theorem 2.1, we get

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( 2 a_{ii} b_{ii} + \rho(A)\rho(B) - a_{ii}\rho(B) - b_{ii}\rho(A) \bigr) \Bigr\}^{1/|\gamma|} = 10.0126.$$
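
The quantities in Example 2.1 can be recomputed directly. The sketch below (using the matrices as reconstructed above) evaluates ρ(A ∘ B) and the right-hand side of (4); the output can be compared with the values reported in the example.

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

A = np.array([[4, 1, 0, 2],
              [0, 0.05, 1, 1],
              [0, 0, 4, 0.5],
              [1, 0.5, 0, 4]], dtype=float)
B = np.ones((4, 4))
H = A * B                                 # A ∘ B

rA, rB = rho(A), rho(B)
a, b = np.diag(A), np.diag(B)
bound4 = np.max(2 * a * b + rA * rB - a * rB - b * rA)   # right-hand side of (4)
print("rho(A o B) =", rho(H), " bound (4) =", bound4)
```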

Next, we give the second inequality for ρ(A ∘ B). For A ≥ 0, write L = A − D, where D = diag(a_11, ..., a_nn). We denote J_A = D_1^{-1} L with D_1 = diag(d_ii), where

$$d_{ii} = \begin{cases} a_{ii}, & \text{if } a_{ii} \neq 0, \\ 1, & \text{if } a_{ii} = 0. \end{cases}$$

Then J_A is nonnegative, and J_A = A if a_ii = 0 for all i. For B ≥ 0, let D_2 = diag(s_ii), with

$$s_{ii} = \begin{cases} b_{ii}, & \text{if } b_{ii} \neq 0, \\ 1, & \text{if } b_{ii} = 0. \end{cases}$$

Then the nonnegative matrix J_B can be similarly defined.
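
These definitions translate directly into code. The sketch below (the helper name split_off_diagonal is an illustrative choice, not from the paper) builds D_1 and J_A for a nonnegative matrix; D_2 and J_B are obtained in the same way.

```python
import numpy as np

def split_off_diagonal(A):
    """Return (D1, J_A): d_ii = a_ii if a_ii != 0 else 1, L = A - diag(A),
    and J_A = D1^{-1} L, following the construction in the text."""
    d = np.where(np.diag(A) != 0, np.diag(A), 1.0)
    L = A - np.diag(np.diag(A))
    return np.diag(d), L / d[:, None]    # divide row i of L by d_ii

# A small nonnegative matrix with a zero diagonal entry, to exercise both cases.
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 1.0, 5.0]])
D1, JA = split_off_diagonal(A)
print("rho(J_A) =", max(abs(np.linalg.eigvals(JA))))
```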

Theorem 2.2 Let A, B ∈ M_n with A ≥ 0 and B ≥ 0. Then

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( a_{ii} b_{ii} + d_{ii}\rho(J_A)\, s_{ii}\rho(J_B) \bigr) \Bigr\}^{1/|\gamma|}. \tag{6}$$

Proof. If A ∘ B is nonnegative irreducible, then A and B are irreducible. Since J_A and J_B are also nonnegative irreducible, there exist two positive vectors x, y such that J_A x = ρ(J_A)x and J_B y = ρ(J_B)y. So, we have

$$\sum_{j \neq i} a_{ij} \frac{x_j}{x_i} = d_{ii}\rho(J_A), \qquad \sum_{j \neq i} b_{ij} \frac{y_j}{y_i} = s_{ii}\rho(J_B), \qquad i \in N.$$

Let Ã = (ã_ij) = Ũ^{-1}AŨ and B̃ = (b̃_ij) = Ṽ^{-1}BṼ, where Ũ = diag(x_1, ..., x_n) and Ṽ = diag(y_1, ..., y_n) are nonsingular diagonal matrices.

From Lemma 2.3, we have

$$(\tilde{U}\tilde{V})^{-1}(A \circ B)(\tilde{U}\tilde{V}) = (\tilde{U}^{-1} A \tilde{U}) \circ (\tilde{V}^{-1} B \tilde{V}) = \tilde{A} \circ \tilde{B},$$

and then

r i ( à B ̃ ) = a i i b i i + j i à i j b ̃ i j a i i b i i + j i a i j x j x i j i b i j y j y i = a i i b i i + d i i ρ ( J A ) s i i ρ ( J B ) .

Let W = ŨṼ. Then for the positive diagonal matrix W, it follows from Lemma 2.2 that

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} r_i\bigl[ W^{-1}(A \circ B) W \bigr] \Bigr\}^{1/|\gamma|} = \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} r_i(\tilde{A} \circ \tilde{B}) \Bigr\}^{1/|\gamma|} \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( a_{ii} b_{ii} + d_{ii}\rho(J_A)\, s_{ii}\rho(J_B) \bigr) \Bigr\}^{1/|\gamma|}.$$

If A ∘ B is reducible, the result follows by substituting A + tP and B + tP for A and B, respectively, in the previous case and letting t → 0⁺, as in the proof of Theorem 2.1.

The bounds for ρ(A ∘ B) obtained in [3] and [5], respectively, are

$$\rho(A \circ B) \le \max_{1 \le i \le n} \bigl\{ a_{ii} b_{ii} + d_{ii}\rho(J_A)\, s_{ii}\rho(J_B) \bigr\}, \tag{7}$$

and

$$\rho(A \circ B) \le \max_{i \neq j} \frac{1}{2} \Bigl\{ a_{ii} b_{ii} + a_{jj} b_{jj} + \bigl[ (a_{ii} b_{ii} - a_{jj} b_{jj})^2 + 4 d_{ii} s_{ii} d_{jj} s_{jj} \rho^2(J_A)\rho^2(J_B) \bigr]^{\frac{1}{2}} \Bigr\}. \tag{8}$$

As in the comparison of (1) and (4), it can be verified that the bound (6) is sharper than the bound (7). Again, we could not compare (6) and (8) theoretically, but the following example shows that the bound of Theorem 2.2 can be sharper than both (7) and (8).

Example 2.2. Let

$$A = \begin{pmatrix} 2 & 0 & 1 & 1 \\ 1 & 4 & 0.5 & 0.5 \\ 1 & 0 & 3 & 0.5 \\ 0.5 & 1 & 1 & 2 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 0.5 & 0.5 & 0.5 \\ 1 & 1 & 1 & 1 \\ 0.5 & 0 & 2 & 0.5 \\ 0 & 1 & 1 & 2 \end{pmatrix}.$$

Then

$$A \circ B = \begin{pmatrix} 4 & 0 & 0.5 & 0.5 \\ 1 & 4 & 0.5 & 0.5 \\ 0.5 & 0 & 6 & 0.25 \\ 0 & 1 & 1 & 4 \end{pmatrix}.$$

It is clear that ρ(J_A) = 0.8182, ρ(J_B) = 1.1258, and ρ(A ∘ B) = 6.3365. By (7) and (8), we have

$$\rho(A \circ B) \le \max_{1 \le i \le 4} \bigl\{ a_{ii} b_{ii} + d_{ii}\rho(J_A)\, s_{ii}\rho(J_B) \bigr\} = 11.5266,$$

and ρ(A ∘ B) ≤ 9.6221, and by Theorem 2.2, we get

$$\rho(A \circ B) \le \max_{\gamma \in \varsigma(A \circ B)} \Bigl\{ \prod_{i \in \bar{\gamma}} \bigl( a_{ii} b_{ii} + d_{ii}\rho(J_A)\, s_{ii}\rho(J_B) \bigr) \Bigr\}^{1/|\gamma|} = 9.4116.$$
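
As with Example 2.1, the quantities in this example can be recomputed. The sketch below evaluates ρ(J_A), ρ(J_B), ρ(A ∘ B), and the right-hand side of (7); the output can be compared with the values reported above.

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

def J_and_diag(A):
    """d_ii = a_ii if a_ii != 0 else 1;  J = D^{-1} (A - diag(A))."""
    d = np.where(np.diag(A) != 0, np.diag(A), 1.0)
    return d, (A - np.diag(np.diag(A))) / d[:, None]

A = np.array([[2, 0, 1, 1], [1, 4, 0.5, 0.5],
              [1, 0, 3, 0.5], [0.5, 1, 1, 2]], dtype=float)
B = np.array([[2, 0.5, 0.5, 0.5], [1, 1, 1, 1],
              [0.5, 0, 2, 0.5], [0, 1, 1, 2]], dtype=float)

d, JA = J_and_diag(A)
s, JB = J_and_diag(B)
bound7 = np.max(np.diag(A) * np.diag(B) + d * rho(JA) * s * rho(JB))   # RHS of (7)
print("rho(A o B) =", rho(A * B), " rho(J_A) =", rho(JA),
      " rho(J_B) =", rho(JB), " bound (7) =", bound7)
```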

3 Conclusions

In this paper, we propose two new upper bounds for the spectral radius of the Hadamard product of nonnegative matrices. These bounds are sharper than the results of [2, 3], and numerical examples illustrate that they can also improve on the previous results of [2–5].

References

  1. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1991.

  2. Fang MZ: Bounds on eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebr Appl 2007, 425: 7–15. 10.1016/j.laa.2007.03.024

  3. Huang R: Some inequalities for the Hadamard product and the Fan product of matrices. Linear Algebr Appl 2008, 428: 1551–1559. 10.1016/j.laa.2007.10.001

  4. Liu QB, Chen GL: On two inequalities for the Hadamard product and the Fan product of matrices. Linear Algebr Appl 2009, 431: 974–984. 10.1016/j.laa.2009.03.049

  5. Liu QB, Chen GL, Zhao LL: Some new bounds on the spectral radius of matrices. Linear Algebr Appl 2010, 432: 936–948. 10.1016/j.laa.2009.10.006

  6. Berman A, Plemmons RJ: Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia; 1994.

  7. Kolotilina LY: Bounds for the Perron root, singularity/nonsingularity conditions, and Eigenvalue inclusion sets. Numer Algorithm 2006, 42: 247–280. 10.1007/s11075-006-9041-7

Acknowledgements

The author wishes to thank Prof. Guoliang Chen and Dr. Qingbin Liu for their help. This research is supported by the NSFC (10971070, 11071079).

Author information

Correspondence to Linlin Zhao.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
