• Research
• Open Access

Two inequalities for the Hadamard product of matrices

Journal of Inequalities and Applications 2012, 2012:122

https://doi.org/10.1186/1029-242X-2012-122

• Accepted: 30 May 2012

Abstract

Using an estimate for the Perron root of a nonnegative matrix in terms of paths in its associated directed graph, two new upper bounds for the spectral radius of the Hadamard product of matrices are proposed. These bounds improve some existing results, as numerical examples show.

MSC 2010: 15A42; 15B34

Keywords

• irreducible matrix

1 Introduction

Let M_n denote the set of all n × n complex matrices and let N denote the set {1, 2, ..., n}. Let A = (a_ij), B = (b_ij) ∈ M_n. If a_ij − b_ij ≥ 0 for all i, j, we write A ≥ B, and if a_ij ≥ 0 for all i, j, we say that A is nonnegative. The spectral radius of A is denoted by ρ(A). If A is a nonnegative matrix, the Perron-Frobenius theorem guarantees that ρ(A) ∈ σ(A), where σ(A) denotes the spectrum of A.

If there does not exist a permutation matrix P such that
${P}^{T}AP=\left(\begin{array}{cc}\hfill {A}_{1}\hfill & \hfill {A}_{12}\hfill \\ \hfill 0\hfill & \hfill {A}_{2}\hfill \end{array}\right),$
where A_1, A_2 are square matrices, then A is called irreducible. Let A be an irreducible nonnegative matrix. It is well known that there exists a positive vector u such that Au = ρ(A)u. The Hadamard product of A and B is defined as A ∘ B = (a_ij b_ij) ∈ M_n. Let A ∈ M_n, and let
${r}_{i}\left(A\right)=\sum _{j=1}^{n}|{a}_{ij}|,\phantom{\rule{1em}{0ex}}1\le i\le n,\phantom{\rule{1em}{0ex}}{R}_{i}\left(A\right)=\sum _{j\ne i}^{n}|{a}_{ij}|,\phantom{\rule{1em}{0ex}}1\le i\le n,$

denote the absolute row sums and the deleted absolute row sums of A, respectively.

Let ς(A) represent the set of all simple circuits in the digraph Γ(A) of A. Recall that a circuit of length k in Γ(A) is an ordered sequence γ = (i_1, ..., i_k, i_{k+1}), where i_1, ..., i_k ∈ N are all distinct and i_{k+1} = i_1. The set {i_1, ..., i_k} is called the support of γ and is denoted by $\stackrel{̄}{\gamma }$. The length of the circuit is denoted by |γ|.
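For the small matrices used in this paper, the circuit set can be enumerated by brute force. The following sketch is our illustration, not part of the paper; the function name and the convention of excluding loops (as in the usual Brualdi digraph, which has arcs only between distinct vertices) are our assumptions.

```python
import itertools
import numpy as np

def simple_circuits(A):
    """List the simple circuits of the digraph of A.

    A circuit (i1, ..., ik, i1) has distinct vertices i1, ..., ik and every
    consecutive arc present (nonzero entry).  Loops are excluded, as in the
    usual Brualdi digraph; brute force is fine for small n.
    """
    n = A.shape[0]
    circuits = []
    for k in range(2, n + 1):
        for support in itertools.combinations(range(n), k):
            first, rest = support[0], support[1:]
            # fixing the smallest vertex first lists each circuit exactly once
            for perm in itertools.permutations(rest):
                path = (first,) + perm + (first,)
                if all(A[path[t], path[t + 1]] != 0 for t in range(k)):
                    circuits.append(path)
    return circuits

A = np.array([[1.0, 1.0], [1.0, 0.0]])
print(simple_circuits(A))  # [(0, 1, 0)]
```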

In , there is a simple estimate for ρ(A ∘ B): if A ≥ 0 and B ≥ 0, then ρ(A ∘ B) ≤ ρ(A)ρ(B).
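This classical inequality is easy to check numerically; a minimal sketch (our code, using numpy with `*` as the entrywise product):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5))   # nonnegative by construction
B = rng.random((5, 5))

rho = lambda M: max(abs(np.linalg.eigvals(M)))
# for nonnegative matrices, rho(A o B) <= rho(A) rho(B)
assert rho(A * B) <= rho(A) * rho(B)
print(rho(A * B), rho(A) * rho(B))
```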

Recently, using the Gersgorin theorem, which involves only elements in one row or column of the matrix, Fang  and Huang  gave new estimates for ρ(A ∘ B) that improved the result of . Using the Brauer theorem, which involves elements in two rows of the matrix at a time, the authors of [4, 5] derived new upper bounds for ρ(A ∘ B) that improved the results of [2, 3]. It is well known that, besides the Gersgorin and Brauer theorems, the Brualdi theorem is also an important eigenvalue inclusion theorem, and it involves more elements of the matrix than the other two. In view of this, Liu  posed the following problem: can we obtain new estimates, better than the previous results, using the Brualdi theorem? In this paper, we answer this question affirmatively. Two new upper bounds for ρ(A ∘ B) are provided. These bounds improve some existing results, and numerical examples illustrate that they are sharper.

2 Main results

First, we give some lemmas which are useful for obtaining the main results.

Lemma 2.1  Let A ∈ M_n be a nonnegative matrix. If A_k is a principal submatrix of A, then ρ(A_k) ≤ ρ(A). If A is irreducible and A_k ≠ A, then ρ(A_k) < ρ(A).

Lemma 2.2  Let A ∈ M_n be a nonnegative matrix, and let ς(A) ≠ ∅. Then for any diagonal matrix D with positive diagonal entries, we have
$\underset{\gamma \in \varsigma \left(A\right)}{\text{min}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}{r}_{i}\left({D}^{-1}AD\right)\right]}^{1/|\gamma |}\le \rho \left(A\right)\le \underset{\gamma \in \varsigma \left(A\right)}{\text{max}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}{r}_{i}\left({D}^{-1}AD\right)\right]}^{1/|\gamma |}.$
Lemma 2.3  Let A, B ∈ M_n. If E, F are diagonal matrices of order n, then
$E\left(A\circ B\right)F=\left(EAF\right)\circ B=\left(EA\right)\circ \left(BF\right)=\left(AF\right)\circ \left(EB\right)=A\circ \left(EBF\right).$
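Since E and F are diagonal, each identity is just a rearrangement of the scalar products e_i a_ij b_ij f_j. A quick numerical check (our sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, B = rng.random((n, n)), rng.random((n, n))
E = np.diag(rng.random(n) + 1.0)   # positive diagonal matrices
F = np.diag(rng.random(n) + 1.0)

lhs = E @ (A * B) @ F              # "*" is the Hadamard (entrywise) product
assert np.allclose(lhs, (E @ A @ F) * B)
assert np.allclose(lhs, (E @ A) * (B @ F))
assert np.allclose(lhs, (A @ F) * (E @ B))
assert np.allclose(lhs, A * (E @ B @ F))
```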
Theorem 2.1 Let A, B ∈ M_n with A ≥ 0, B ≥ 0. Then
$\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\text{max}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}\left(2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right)\right]}^{1/|\gamma |}.$
(1)
Proof. If A ∘ B is irreducible, then A and B are irreducible. From Lemma 2.1, we have
$\begin{array}{l}\rho \left(A\right)-{a}_{ii}>0,\phantom{\rule{1em}{0ex}}\forall i\in N,\phantom{\rule{2em}{0ex}}\\ \rho \left(B\right)-{b}_{ii}>0,\phantom{\rule{1em}{0ex}}\forall i\in N.\phantom{\rule{2em}{0ex}}\end{array}$
Since A = (a_ij) and B = (b_ij) are nonnegative and irreducible, there exist two positive vectors u, v such that Au = ρ(A)u and Bv = ρ(B)v. Thus, we have
${a}_{ii}+\sum _{j\ne i}\frac{{a}_{ij}{u}_{j}}{{u}_{i}}=\rho \left(A\right),\phantom{\rule{1em}{0ex}}\forall i\in N,$
(2)
and
${b}_{ii}+\sum _{j\ne i}\frac{{b}_{ij}{v}_{j}}{{v}_{i}}=\rho \left(B\right),\phantom{\rule{1em}{0ex}}\forall i\in N.$
(3)
Define U = diag(u1, ..., u n ), V = diag(v1, ..., v n ). Let $Â=\left({Â}_{ij}\right)={U}^{-1}AU$, $\stackrel{^}{B}=\left({\stackrel{^}{B}}_{ij}\right)={V}^{-1}BV$. From (2) and (3), we have
${r}_{i}\left(Â\right)={a}_{ii}+\sum _{j\ne i}\frac{{a}_{ij}{u}_{j}}{{u}_{i}}=\rho \left(A\right),\phantom{\rule{1em}{0ex}}\forall i\in N.$
and
${r}_{i}\left(\stackrel{^}{B}\right)={b}_{ii}+\sum _{j\ne i}\frac{{b}_{ij}{v}_{j}}{{v}_{i}}=\rho \left(B\right),\phantom{\rule{1em}{0ex}}\forall i\in N.$
Let D = VU. According to Lemma 2.2, for the positive diagonal matrix D, we have
$\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\text{max}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}{r}_{i}\left[{D}^{-1}\left(A\circ B\right)D\right]\right]}^{1/|\gamma |}.$
Using Lemma 2.3, we have
${D}^{-1}\left(A\phantom{\rule{2.77695pt}{0ex}}\circ B\right)D={U}^{-1}{V}^{-1}\left(A\circ B\right)VU={U}^{-1}\left(A\phantom{\rule{2.77695pt}{0ex}}\circ \left({V}^{-1}BV\right)\right)U=\left({U}^{-1}AU\right)\circ \left({V}^{-1}BV\right)=Â\circ \stackrel{^}{B}.$
Then,
$\begin{array}{ll}\hfill {r}_{i}\left[{D}^{-1}\left(A\circ B\right)D\right]& ={r}_{i}\left(Â\circ \stackrel{^}{B}\right)={a}_{ii}{b}_{ii}+\sum _{j\ne i}{Â}_{ij}{\stackrel{^}{B}}_{ij}\phantom{\rule{2em}{0ex}}\\ \le {a}_{ii}{b}_{ii}+\sum _{j\ne i}{Â}_{ij}\sum _{j\ne i}{\stackrel{^}{B}}_{ij}={a}_{ii}{b}_{ii}+\left(\rho \left(A\right)-{a}_{ii}\right)\left(\rho \left(B\right)-{b}_{ii}\right).\phantom{\rule{2em}{0ex}}\end{array}$
So, we have
$\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\text{max}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}\left(2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right)\right]}^{1/|\gamma |}.$

If A ∘ B is reducible, then one of A and B is reducible. Denote by P = (p_ij) the n × n permutation matrix with p_12 = p_23 = · · · = p_{n1} = 1 and all remaining p_ij = 0. Then both A + tP and B + tP are nonnegative irreducible matrices for any positive real number t. Substituting A + tP and B + tP for A and B, respectively, in the argument above and letting t → 0, the result follows by continuity.
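The perturbation step can be checked directly: adding tP to a reducible nonnegative matrix puts a cycle through every vertex, which makes the digraph strongly connected. A small sketch (the helper name `is_irreducible` is ours; it uses the standard criterion that A is irreducible iff (I + |A|)^(n−1) is entrywise positive):

```python
import numpy as np

def is_irreducible(M):
    """M is irreducible iff (I + |M|)^(n-1) has all entries positive."""
    n = M.shape[0]
    S = np.linalg.matrix_power(np.eye(n) + (np.abs(M) > 0), n - 1)
    return bool((S > 0).all())

n = 4
P = np.zeros((n, n))
P[np.arange(n), (np.arange(n) + 1) % n] = 1.0   # p12 = p23 = p34 = p41 = 1

A = np.diag([1.0, 2.0, 3.0, 4.0])               # diagonal, hence reducible
assert not is_irreducible(A)
assert is_irreducible(A + 1e-8 * P)             # any t > 0 already suffices
```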

Two bounds for ρ(A ∘ B), given in  and  respectively, are
$\rho \left(A\circ B\right)\le \underset{1\le i\le n}{\text{max}}\left\{2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right\},$
(4)
and
$\rho \left(A\circ B\right)\le \underset{i\ne j}{\mathrm{max}}\frac{1}{2}\left\{{a}_{ii}{b}_{ii}+{a}_{jj}{b}_{jj}+{\left[{\left({a}_{ii}{b}_{ii}-{a}_{jj}{b}_{jj}\right)}^{2}+4\left(\rho \left(A\right)-{a}_{ii}\right)\left(\rho \left(B\right)-{b}_{ii}\right)\left(\rho \left(A\right)-{a}_{jj}\right)\left(\rho \left(B\right)-{b}_{jj}\right)\right]}^{1/2}\right\}.$
(5)
Next, we give a simple comparison between (1) and (4). It is easy to see that
$\begin{array}{ll}\hfill \rho \left(A\circ B\right)& \le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\text{max}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}\left(2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right)\right]}^{1/|\gamma |}\phantom{\rule{2em}{0ex}}\\ \le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\text{max}}{\left[{\left(\underset{i\in N}{\text{max}}\left\{2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right\}\right)}^{|\gamma |}\right]}^{1/|\gamma |}\phantom{\rule{2em}{0ex}}\\ =\underset{i\in N}{\text{max}}\left\{2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right\}.\phantom{\rule{2em}{0ex}}\end{array}$

Hence the bound (1) is always at least as sharp as the bound (4). A direct theoretical comparison between (1) and (5) seems difficult, but the following numerical example shows that the bound of Theorem 2.1 can be sharper than both (4) and (5).

Example 2.1. Consider two 4 × 4 nonnegative matrices
$A=\left(\begin{array}{cccc}\hfill 4\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 2\hfill \\ \hfill 0\hfill & \hfill 0.05\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill & \hfill 0.5\hfill \\ \hfill 1\hfill & \hfill 0.5\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right),\phantom{\rule{2.77695pt}{0ex}}\phantom{\rule{1em}{0ex}}B=\left(\begin{array}{cccc}\hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \end{array}\right).$
Since B is the all-ones matrix, A ∘ B = A, and it is easy to calculate that ρ(A ∘ B) = ρ(A) = 5.4983. By inequalities (4) and (5), we have
$\rho \left(A\circ B\right)\le \underset{1\le i\le 4}{\text{max}}\left\{2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right\}=16.3949,$
and, by (5), ρ(A ∘ B) ≤ 11.6478, while by Theorem 2.1 we get
$\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\text{max}}{\left[\prod _{i\in \stackrel{̄}{\gamma }}\left(2{a}_{ii}{b}_{ii}+\rho \left(A\right)\rho \left(B\right)-{a}_{ii}\rho \left(B\right)-{b}_{ii}\rho \left(A\right)\right)\right]}^{1/|\gamma |}=10.0126.$
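Example 2.1 can be reproduced numerically. The sketch below is our code, not the authors'; it enumerates circuits of length ≥ 2 by brute force, so the exact value of the circuit bound depends on which circuits ς(A ∘ B) is taken to contain. The chain ρ(A ∘ B) ≤ bound (1) ≤ bound (4) holds in any case.

```python
import itertools
import numpy as np

def simple_circuits(M):
    # brute-force list of simple circuits (length >= 2) of the digraph of M
    n = M.shape[0]
    out = []
    for k in range(2, n + 1):
        for sup in itertools.combinations(range(n), k):
            for perm in itertools.permutations(sup[1:]):
                path = (sup[0],) + perm + (sup[0],)
                if all(M[path[t], path[t + 1]] != 0 for t in range(k)):
                    out.append(path)
    return out

rho = lambda M: max(abs(np.linalg.eigvals(M)))

A = np.array([[4, 1, 0, 2], [0, 0.05, 1, 1], [0, 0, 4, 0.5], [1, 0.5, 0, 4.0]])
B = np.ones((4, 4))
AB = A * B                      # Hadamard product; here AB = A since B is all ones

rA, rB = rho(A), rho(B)
# the per-vertex terms 2 a_ii b_ii + rho(A) rho(B) - a_ii rho(B) - b_ii rho(A)
t = 2 * np.diag(A) * np.diag(B) + rA * rB - np.diag(A) * rB - np.diag(B) * rA

bound4 = t.max()                                        # bound (4)
bound1 = max(np.prod(t[list(c[:-1])]) ** (1.0 / (len(c) - 1))
             for c in simple_circuits(AB))              # circuit bound (1)

assert rho(AB) <= bound1 <= bound4
print(rho(AB), bound1, bound4)  # the paper reports 5.4983 and 16.3949 for the ends
```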
Next, we give the second inequality for ρ(A ∘ B). For A ≥ 0, write L = A − D, where D = diag(a_11, ..., a_nn). We denote ${J}_{A}={D}_{1}^{-1}L$ with D_1 = diag(d_ii), where
${d}_{ii}=\left\{\begin{array}{cc}{a}_{ii},& \text{if}\phantom{\rule{1em}{0ex}}{a}_{ii}\ne 0,\\ 1,& \text{if}\phantom{\rule{1em}{0ex}}{a}_{ii}=0.\end{array}\right.$
Then, J A is nonnegative, and J A = A if a ii = 0 for all i. For B ≥ 0, let D2 = diag(s ii ), with
${s}_{ii}=\left\{\begin{array}{cc}{b}_{ii},& \text{if}\phantom{\rule{1em}{0ex}}{b}_{ii}\ne 0,\\ 1,& \text{if}\phantom{\rule{1em}{0ex}}{b}_{ii}=0.\end{array}\right.$

Then the nonnegative matrix J B can be similarly defined.
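In code, J_A is just the off-diagonal part of A with row i divided by a_ii (or by 1 when a_ii = 0). A sketch under these definitions (the helper name is ours):

```python
import numpy as np

def J_and_D(A):
    """Return (J_A, d) with J_A = D1^{-1}(A - diag(A)) and d the diagonal of D1."""
    diag = np.diag(A)
    d = np.where(diag != 0, diag, 1.0)     # d_ii = a_ii, or 1 when a_ii = 0
    L = A - np.diag(diag)
    return L / d[:, None], d

A = np.array([[2.0, 1.0], [3.0, 0.0]])
JA, d = J_and_D(A)
# row 1 of A - diag(A) is divided by 2; row 2 by 1 since a_22 = 0
assert np.allclose(JA, [[0.0, 0.5], [3.0, 0.0]])
assert np.allclose(d, [2.0, 1.0])
```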

Theorem 2.2 Let A, B ∈ M_n with A ≥ 0, B ≥ 0. Then
$\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\mathrm{max}}{\left[\prod _{i\in \overline{\gamma }}\left({a}_{ii}{b}_{ii}+{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right)\right)\right]}^{1/|\gamma |}.$
(6)
Proof. If A ∘ B is nonnegative and irreducible, then A and B are irreducible. Since J_A and J_B are then also nonnegative and irreducible, there exist two positive vectors x, y such that J_A x = ρ(J_A)x and J_B y = ρ(J_B)y. So we have
$\sum _{j\ne i}\frac{{a}_{ij}{x}_{j}}{{x}_{i}}={d}_{ii}\rho \left({J}_{A}\right),\phantom{\rule{1em}{0ex}}\sum _{j\ne i}\frac{{b}_{ij}{y}_{j}}{{y}_{i}}={s}_{ii}\rho \left({J}_{B}\right).$

Let $Ã=\left({Ã}_{ij}\right)={Ũ}^{-1}AŨ$ and $\stackrel{̃}{B}=\left({\stackrel{̃}{b}}_{ij}\right)={Ṽ}^{-1}BṼ$, where $Ũ=\text{diag}\left({x}_{1},\cdots \phantom{\rule{0.3em}{0ex}},{x}_{n}\right)$ and $Ṽ=\text{diag}\left({y}_{1},\cdots \phantom{\rule{0.3em}{0ex}},{y}_{n}\right)$ are positive diagonal matrices.

From Lemma 2.3, we have
${\left(ṼŨ\right)}^{-1}\left(A\circ B\right)\left(ṼŨ\right)=\left({Ũ}^{-1}AŨ\right)\circ \left({Ṽ}^{-1}BṼ\right)=Ã\circ \stackrel{̃}{B},$
and then
${r}_{i}\left(Ã\circ \stackrel{̃}{B}\right)={a}_{ii}{b}_{ii}+\sum _{j\ne i}{Ã}_{ij}{\stackrel{̃}{b}}_{ij}\le {a}_{ii}{b}_{ii}+\sum _{j\ne i}\frac{{a}_{ij}{x}_{j}}{{x}_{i}}\sum _{j\ne i}\frac{{b}_{ij}{y}_{j}}{{y}_{i}}={a}_{ii}{b}_{ii}+{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right).$
Let $W=ṼŨ$. Then for the positive diagonal matrix W, it follows from Lemma 2.2 that
$\begin{array}{c}\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\mathrm{max}}{\left[\prod _{i\in \overline{\gamma }}{r}_{i}\left[{W}^{-1}\left(A\circ B\right)W\right]\right]}^{1/|\gamma |}\\ =\underset{\gamma \in \varsigma \left(A\circ B\right)}{\mathrm{max}}{\left[\prod _{i\in \overline{\gamma }}{r}_{i}\left(Ã\circ \stackrel{̃}{B}\right)\right]}^{1/|\gamma |}\\ \le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\mathrm{max}}{\left[\prod _{i\in \overline{\gamma }}\left({a}_{ii}{b}_{ii}+{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right)\right)\right]}^{1/|\gamma |}.\end{array}$

If A ∘ B is reducible, then substituting A + tP and B + tP for A and B, respectively, in the argument above and letting t → 0, the result follows.

The bounds for ρ(A ∘ B) obtained in  and , respectively, are
$\rho \left(A\circ B\right)\le \underset{1\le i\le n}{\text{max}}\left\{{a}_{ii}{b}_{ii}+{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right)\right\},$
(7)
and
$\rho \left(A\circ B\right)\le \underset{i\ne j}{\mathrm{max}}\frac{1}{2}\left\{{a}_{ii}{b}_{ii}+{a}_{jj}{b}_{jj}+{\left[{\left({a}_{ii}{b}_{ii}-{a}_{jj}{b}_{jj}\right)}^{2}+4{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right){d}_{jj}\rho \left({J}_{A}\right){s}_{jj}\rho \left({J}_{B}\right)\right]}^{1/2}\right\}.$
(8)

It can easily be verified, as for (1) and (4), that the bound (6) is always at least as sharp as the bound (7). Again we do not have a theoretical comparison between (6) and (8), but the following example shows that the bound of Theorem 2.2 can be sharper than both (7) and (8).

Example 2.2. Let
$A=\left(\begin{array}{cccc}\hfill 2\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 4\hfill & \hfill 0.5\hfill & \hfill 0.5\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill 3\hfill & \hfill 0.5\hfill \\ \hfill 0.5\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 2\hfill \end{array}\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{2.77695pt}{0ex}}B=\left(\begin{array}{cccc}\hfill 2\hfill & \hfill 0.5\hfill & \hfill 0.5\hfill & \hfill 0.5\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 0.5\hfill & \hfill 0\hfill & \hfill 2\hfill & \hfill 0.5\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 2\hfill \end{array}\right).$
Then
$A\circ B=\left(\begin{array}{cccc}\hfill 4\hfill & \hfill 0\hfill & \hfill 0.5\hfill & \hfill 0.5\hfill \\ \hfill 1\hfill & \hfill 4\hfill & \hfill 0.5\hfill & \hfill 0.5\hfill \\ \hfill 0.5\hfill & \hfill 0\hfill & \hfill 6\hfill & \hfill 0.25\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right).$
It is clear that ρ(J_A) = 0.8182, ρ(J_B) = 1.1258, and ρ(A ∘ B) = 6.3365. By (7) and (8), we have
$\rho \left(A\circ B\right)\le \underset{1\le i\le 4}{\text{max}}\left\{{a}_{ii}{b}_{ii}+{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right)\right\}=11.5266,$
and, by (8), ρ(A ∘ B) ≤ 9.6221, while by Theorem 2.2 we get
$\rho \left(A\circ B\right)\le \underset{\gamma \in \varsigma \left(A\circ B\right)}{\mathrm{max}}{\left[\prod _{i\in \overline{\gamma }}\left({a}_{ii}{b}_{ii}+{d}_{ii}\rho \left({J}_{A}\right){s}_{ii}\rho \left({J}_{B}\right)\right)\right]}^{1/|\gamma |}=9.4116.$
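Example 2.2 can likewise be checked numerically; the sketch below is our code (the helper name `J_and_D` is ours) and verifies the bound (7) for these matrices:

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

def J_and_D(M):
    # J_M = D1^{-1}(M - diag(M)); d_ii = m_ii, or 1 when m_ii = 0
    diag = np.diag(M)
    d = np.where(diag != 0, diag, 1.0)
    return (M - np.diag(diag)) / d[:, None], d

A = np.array([[2, 0, 1, 1], [1, 4, 0.5, 0.5], [1, 0, 3, 0.5], [0.5, 1, 1, 2.0]])
B = np.array([[2, 0.5, 0.5, 0.5], [1, 1, 1, 1], [0.5, 0, 2, 0.5], [0, 1, 1, 2.0]])

JA, d = J_and_D(A)
JB, s = J_and_D(B)

# bound (7): max_i { a_ii b_ii + d_ii rho(J_A) s_ii rho(J_B) }
bound7 = (np.diag(A) * np.diag(B) + d * rho(JA) * s * rho(JB)).max()
assert rho(A * B) <= bound7
print(rho(A * B), rho(JA), rho(JB), bound7)
```

The paper reports ρ(J_A) = 0.8182, ρ(J_B) = 1.1258, ρ(A ∘ B) = 6.3365, and 11.5266 for (7).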

3 Conclusions

In this paper, we have proposed two new upper bounds for the spectral radius of the Hadamard product of nonnegative matrices. These bounds are better than the results of [2, 3], and numerical examples illustrate that they are also superior to previously published bounds.

Declarations

Acknowledgements

The author wishes to thank Prof. Guoliang Chen and Dr. Qingbin Liu for their help. This research was supported by the NSFC (10971070, 11071079).

Authors’ Affiliations

(1)
Department of Mathematics, Dezhou University, Dezhou, 253023 Shandong, China

References 