
New bounds for the spectral radius for nonnegative tensors

Journal of Inequalities and Applications 2015, 2015:166

https://doi.org/10.1186/s13660-015-0689-1

Received: 7 March 2015

Accepted: 11 May 2015

Published: 23 May 2015

Abstract

A lower bound and an upper bound for the spectral radius for nonnegative tensors are obtained. A numerical example is given to show that the new bounds are sharper than the corresponding bounds obtained by Yang and Yang (SIAM J. Matrix Anal. Appl. 31:2517-2530, 2010), and that the upper bound is sharper than that obtained by Li et al. (Numer. Linear Algebra Appl. 21:39-50, 2014).

Keywords

bounds; spectral radius; nonnegative tensor

MSC

15A69; 15A18

1 Introduction

A real order m dimension n tensor \(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\), denoted by \(\mathcal{A}\in R^{[m,n]}\), consists of \(n^{m}\) real entries:
$$a_{i_{1}\cdots i_{m}}\in R, $$
where \(i_{j}=1,\ldots,n\) for \(j=1,\ldots, m\). A tensor \(\mathcal{A}\) is called nonnegative (positive), denoted by \(\mathcal{A}\geq0\) (\(\mathcal{A}>0\)), if every entry \(a_{i_{1}\cdots i_{m}}\geq0\) (\(a_{i_{1}\cdots i_{m}}> 0\), respectively). Given a tensor \(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\in R^{[m,n]}\), if there are a complex number λ and a nonzero complex vector \(x=(x_{1},x_{2},\ldots,x_{n})^{T}\) that are solutions of the following homogeneous polynomial equations:
$$\mathcal{A}x^{m-1}=\lambda x^{[m-1]}, $$
then λ is called an eigenvalue of \(\mathcal{A}\) and x an eigenvector of \(\mathcal{A}\) associated with λ [1–6], where \(\mathcal{A}x^{m-1}\) and \(x^{[m-1]}\) are vectors whose ith entries are
$$\bigl(\mathcal{A}x^{m-1}\bigr)_{i}=\sum _{i_{2},\ldots,i_{m}\in N} a_{ii_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}} \quad \bigl(N=\{1,2,\ldots,n\}\bigr) $$
and \((x^{[m-1]})_{i}=x_{i}^{m-1}\), respectively. Moreover, the spectral radius \(\rho(\mathcal{A})\) [7] of the tensor \(\mathcal{A}\) is defined as
$$\rho(\mathcal{A})=\max\bigl\{ \vert \lambda \vert : \lambda\mbox{ is an eigenvalue of } \mathcal{A}\bigr\} . $$
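To make these definitions concrete, the following is a minimal numerical sketch (assuming the tensor is stored as a numpy array of shape \((n,\ldots,n)\) with entry \(a_{i_{1}\cdots i_{m}}\) at position \([i_{1}-1,\ldots,i_{m}-1]\)); the function names are illustrative, and the power-type iteration, in the spirit of the algorithm of Ng, Qi and Zhou [16], is only guaranteed to behave well for positive tensors.

```python
import numpy as np

def tensor_apply(A, x):
    """Return the vector A x^{m-1}: contract every index of A except the first with x."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x                                   # each @ contracts the current last axis with x
    return y

def spectral_radius_estimate(A, tol=1e-10, max_iter=1000):
    """Estimate rho(A) for a positive tensor A by a power-type iteration."""
    m, n = A.ndim, A.shape[0]
    x = np.ones(n)                                  # positive starting vector
    lam_lo = lam_hi = None
    for _ in range(max_iter):
        y = tensor_apply(A, x)                      # (A x^{m-1})_i
        ratios = y / x ** (m - 1)                   # Collatz-Wielandt-type ratios bracketing rho(A)
        lam_lo, lam_hi = ratios.min(), ratios.max()
        if lam_hi - lam_lo < tol:
            break
        x = y ** (1.0 / (m - 1))                    # next iterate, (A x^{m-1})^{[1/(m-1)]}
        x /= x.max()                                # rescale to avoid overflow
    return 0.5 * (lam_lo + lam_hi)
```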
Eigenvalues of tensors have become an important topic of study in numerical multilinear algebra, and they have a wide range of practical applications; see [4, 5, 8–21]. Recently, for the largest eigenvalue of a nonnegative tensor, Chang et al. [2] generalized the well-known Perron-Frobenius theorem for irreducible nonnegative matrices to irreducible nonnegative tensors. Here a tensor \(\mathcal{A}=(a_{i_{1}\cdots i_{m}}) \in R^{[m,n]}\) is called reducible if there exists a nonempty proper index subset \(I\subset N\) such that
$$a_{i_{1}i_{2}\cdots i_{m} }=0 \quad \mbox{for all } i_{1}\in I, \mbox{for all } i_{2},\ldots,i_{m}\notin I. $$
If \(\mathcal{A}\) is not reducible, then we call \(\mathcal{A}\) irreducible.

Theorem 1

(Theorem 1.4 in [2])

If \(\mathcal{A} \in R^{[m,n]} \) is irreducible and nonnegative, then \(\rho(\mathcal{A})\) is a positive eigenvalue of \(\mathcal{A}\) with a corresponding entrywise positive eigenvector x, i.e., \(x> 0\).

Subsequently, Yang and Yang [21] extended this theorem to nonnegative tensors.

Theorem 2

(Theorem 2.3 in [21])

If \(\mathcal{A} \in R^{[m,n]} \) is nonnegative, then \(\rho(\mathcal{A})\) is an eigenvalue of \(\mathcal{A}\) with a corresponding entrywise nonnegative eigenvector x, i.e., \(x\geq0\), \(x\neq0\).

Yang and Yang [21] also provided a lower bound and an upper bound for the spectral radius of a nonnegative tensor.

Theorem 3

(Lemma 5.2 in [21])

Let \(\mathcal{A}=(a_{i_{1}\cdots i_{m}}) \in R^{[m,n]} \) be nonnegative. Then
$$R_{\mathrm{min}}\leq\rho(\mathcal{A})\leq R_{\mathrm{max}}, $$
where \(R_{\mathrm{min}}=\min_{i\in N} R_{i}(\mathcal{A})\), \(R_{\mathrm{max}}= \max_{i\in N} R_{i}(\mathcal{A})\), and \(R_{i}(\mathcal{A}) =\sum_{i_{2},\ldots,i_{m}\in N } a_{ii_{2}\cdots i_{m}}\).
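Under the array convention used in the sketch above, the bounds of Theorem 3 reduce to minimum and maximum row sums of the flattened tensor; a minimal sketch (helper names are illustrative):

```python
import numpy as np

def row_sums(A):
    """R_i(A) = sum over i_2, ..., i_m of a_{i i_2 ... i_m}, for each i."""
    n = A.shape[0]
    return A.reshape(n, -1).sum(axis=1)

def yang_yang_bounds(A):
    """Bounds of Theorem 3: (R_min, R_max)."""
    R = row_sums(A)
    return R.min(), R.max()
```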

To obtain sharper estimates for the spectral radius of a nonnegative tensor, Li et al. [22] gave an upper bound that is tighter than the one in Theorem 3.

Theorem 4

(Theorems 3.3 and 3.5 in [22])

Let \(\mathcal{A}=(a_{i_{1}\cdots i_{m}}) \in R^{[m,n]} \) be nonnegative with \(n \geq2\). Then
$$\rho(\mathcal{A})\leq\Omega_{\mathrm{max}}, $$
where
$$\Omega_{\mathrm{max}}= \max_{\substack{i,j\in N,\\ j\neq i}} \frac{1}{2} \Bigl( a_{i\cdots i}+a_{j\cdots j}+r_{i}^{j}(\mathcal{A})+ \sqrt{ \bigl( a_{i\cdots i}-a_{j\cdots j}+r_{i}^{j}( \mathcal{A}) \bigr)^{2}+ 4a_{ij\cdots j} r_{j}( \mathcal{A})} \Bigr). $$
Furthermore, \(\Omega_{\mathrm{max}}\leq R_{\mathrm{max}}\).
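A sketch of \(\Omega_{\mathrm{max}}\), assuming the same storage convention as above (here \(a_{ij\cdots j}\) is the entry whose first index is i and whose remaining \(m-1\) indices all equal j, \(r_{i}(\mathcal{A})=R_{i}(\mathcal{A})-a_{i\cdots i}\) and \(r_{i}^{j}(\mathcal{A})=r_{i}(\mathcal{A})-a_{ij\cdots j}\); the function name is illustrative):

```python
import numpy as np
from itertools import permutations

def omega_max(A):
    """Upper bound Omega_max of Theorem 4 (sketch)."""
    n, m = A.shape[0], A.ndim
    R = A.reshape(n, -1).sum(axis=1)                     # R_i(A)
    a_diag = np.array([A[(i,) * m] for i in range(n)])   # a_{i...i}
    r = R - a_diag                                       # r_i(A) = R_i(A) - a_{i...i}
    best = -np.inf
    for i, j in permutations(range(n), 2):               # ordered pairs with j != i
        a_ijj = A[(i,) + (j,) * (m - 1)]                 # a_{ij...j}
        r_ij = r[i] - a_ijj                              # r_i^j(A)
        val = 0.5 * (a_diag[i] + a_diag[j] + r_ij
                     + np.sqrt((a_diag[i] - a_diag[j] + r_ij) ** 2
                               + 4.0 * a_ijj * r[j]))
        best = max(best, val)
    return best
```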

In this paper, we continue this line of research and give a lower bound and an upper bound for \(\rho(\mathcal{A})\) of a nonnegative tensor \(\mathcal{A}\), both of which depend only on the entries of \(\mathcal{A}\). It is proved that these bounds are sharper than the corresponding bounds in [21] and [22]. A numerical example is also given to verify the obtained results.

2 New bounds for the spectral radius of nonnegative tensors

In this section, bounds for the spectral radius of a nonnegative tensor are obtained. We first give some notation. Given a nonnegative tensor \(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\in R^{[m,n]}\), we denote
$$\begin{aligned}& \Theta_{i}=\bigl\{ (i_{2},i_{3}, \ldots,i_{m}): i_{j}=i \mbox{ for some } j \in\{2,\ldots,m\}, \mbox{where } i,i_{2},\ldots,i_{m}\in N\bigr\} , \\& \overline{\Theta}_{i}=\bigl\{ (i_{2},i_{3}, \ldots,i_{m}): i_{j}\neq i \mbox{ for any } j \in\{2,\ldots,m \}, \mbox{where } i,i_{2},\ldots,i_{m}\in N\bigr\} , \\& r_{i}(\mathcal{A})=\sum_{\substack{i_{2},\ldots,i_{m}\in N,\\ \delta _{ii_{2}\cdots i_{m}}=0}} a_{ii_{2}\cdots i_{m}}=\sum_{i_{2},\ldots,i_{m}\in N} a_{ii_{2}\cdots i_{m}}-a_{i\cdots i}=R_{i}( \mathcal{A})-a_{i\cdots i}, \\& r_{i}^{j}(\mathcal {A})=\sum_{\substack{\delta_{ii_{2}\ldots i_{m}}=0,\\ \delta_{ji_{2}\cdots i_{m}}=0}} a_{ii_{2}\cdots i_{m}}=\sum_{\substack{i_{2},\ldots,i_{m}\in N,\\ \delta_{ii_{2}\ldots i_{m}}=0}} a_{ii_{2}\cdots i_{m}}-a_{ij\cdots j}=r_{i}( \mathcal{A})-a_{ij\cdots j}, \\& r_{i}^{\Theta_{i}}(\mathcal{A})=\sum_{\substack{(i_{2},\ldots,i_{m})\in\Theta_{i},\\ \delta_{ii_{2}\cdots i_{m}}=0}} |a_{ii_{2}\cdots i_{m}}|,\qquad r_{i}^{\overline{\Theta}_{i}}(\mathcal {A})=\sum _{(i_{2},\ldots,i_{m})\in\overline{\Theta}_{i}} |a_{ii_{2}\cdots i_{m}}|, \end{aligned}$$
where
$$\delta_{i_{1}\cdots i_{m}}=\left \{ \begin{array}{l@{\quad}l} 1, &\mbox{if } i_{1}=\cdots=i_{m}, \\ 0, &\mbox{otherwise}. \end{array} \right . $$
Obviously, \(r_{i}(\mathcal{A})= r_{i}^{\Theta_{i}}(\mathcal {A})+r_{i}^{\overline{\Theta}_{i}}(\mathcal{A})\), and \(r_{i}^{j}(\mathcal {A})= r_{i}^{\Theta_{i}}(\mathcal {A})+r_{i}^{\overline{\Theta}_{i}}(\mathcal{A})-|a_{ij\cdots j}|\).
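The sets \(\Theta_{i}\) and \(\overline{\Theta}_{i}\) simply split the off-diagonal entries of the ith slice according to whether the index tuple \((i_{2},\ldots,i_{m})\) contains i. A brute-force sketch of this split (illustrative only; theta_splits is a hypothetical helper name, and the same numpy storage convention as before is assumed):

```python
import numpy as np
from itertools import product

def theta_splits(A):
    """Return arrays (r_theta, r_theta_bar) holding r_i^{Theta_i}(A) and r_i^{bar Theta_i}(A)."""
    n, m = A.shape[0], A.ndim
    r_theta = np.zeros(n)
    r_theta_bar = np.zeros(n)
    for i in range(n):
        for idx in product(range(n), repeat=m - 1):      # idx = (i_2, ..., i_m)
            if idx == (i,) * (m - 1):
                continue                                 # skip the diagonal entry a_{i...i}
            if i in idx:
                r_theta[i] += A[(i,) + idx]              # tuple contains the index i
            else:
                r_theta_bar[i] += A[(i,) + idx]          # tuple avoids the index i
    return r_theta, r_theta_bar
```

One can check numerically that r_theta + r_theta_bar reproduces \(r_{i}(\mathcal{A})\), in agreement with the identity above.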

For an irreducible nonnegative tensor, we give the following bounds for the spectral radius.

Lemma 1

Let \(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\in R^{[m,n]}\) be an irreducible nonnegative tensor with \(n \geq2\). Then
$$\Delta_{\mathrm{min}} \leq\rho(\mathcal{A})\leq \Delta_{\mathrm{max}}, $$
where
$$\Delta_{\mathrm{min}}=\min_{\substack{i,j\in N,\\ j\neq i}} \Delta_{i,j}( \mathcal{A}), \qquad \Delta_{\mathrm{max}}=\max_{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}(\mathcal{A}) $$
and
$$\Delta_{i,j}(\mathcal{A})=\frac{1}{2} \Bigl( a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}(\mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}+4r_{i}^{\overline{\Theta }_{i}}( \mathcal{A})r_{j}(\mathcal {A})} \Bigr). $$

Proof

Let \(x=(x_{1},x_{2},\ldots,x_{n})^{T}\) be an entrywise positive eigenvector of \(\mathcal{A}\) corresponding to \(\rho(\mathcal{A})\), that is,
$$ \mathcal{A}x^{m-1}=\rho(\mathcal{A})x^{[m-1]}. \tag{1} $$
Without loss of generality, suppose that
$$x_{t_{n}}\geq x_{t_{n-1}} \geq\cdots\geq x_{t_{2}} \geq x_{t_{1}}>0. $$
(i) We first prove
$$\Delta_{\mathrm{min}}=\min_{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}( \mathcal {A})\leq \rho(\mathcal{A}). $$
From (1), we have
$$\sum_{i_{2},\ldots,i_{m}\in N} a_{t_{1}i_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}}=\rho(\mathcal{A}) x_{t_{1}}^{m-1}, $$
equivalently,
$$\bigl(\rho(\mathcal{A})-a_{t_{1}\cdots t_{1}}\bigr)x_{t_{1}}^{m-1}= \sum_{\substack{(i_{2},\ldots,i_{m})\in \Theta_{t_{1}},\\ \delta_{t_{1}i_{2}\ldots i_{m}}=0}} a_{t_{1}i_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}}+ \sum_{(i_{2},\ldots ,i_{m})\in \overline{\Theta}_{t_{1}} }a_{t_{1}i_{2}\cdots i_{m}}x_{i_{2}} \cdots x_{i_{m}}. $$
Hence,
$$\begin{aligned} \bigl(\rho(\mathcal{A})-a_{t_{1}\cdots t_{1}}\bigr)x_{t_{1}}^{m-1} \geq& \sum_{\substack{(i_{2},\ldots,i_{m})\in\Theta_{t_{1}}, \\ \delta_{t_{1}i_{2}\ldots i_{m}}=0}} a_{t_{1}i_{2}\cdots i_{m}}x_{t_{1}}^{m-1}+ \sum_{(i_{2},\ldots,i_{m})\in \overline{\Theta}_{t_{1}}} a_{t_{1}i_{2}\cdots i_{m}}x_{t_{2}}^{m-1} \\ =&r_{t_{1}}^{\Theta_{t_{1}}}(\mathcal{A})x_{t_{1}}^{m-1}+ r_{t_{1}}^{\overline{\Theta}_{t_{1}}}(\mathcal{A})x_{t_{2}}^{m-1}, \end{aligned}$$
i.e.,
$$ \bigl(\rho(\mathcal{A})-a_{t_{1}\cdots t_{1}} -r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr)x_{t_{1}}^{m-1}\geq r_{t_{1}}^{\overline{\Theta}_{t_{1}}}( \mathcal{A})x_{t_{2}}^{m-1} \geq 0. \tag{2} $$
Similarly, we have, from (1),
$$\sum_{i_{2},\ldots,i_{m}\in N} a_{t_{2}i_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}}=\rho(\mathcal{A}) x_{t_{2}}^{m-1} $$
and
$$ \bigl(\rho(\mathcal{A})-a_{t_{2}\cdots t_{2}} \bigr)x_{t_{2}}^{m-1} \geq r_{t_{2}}(\mathcal{A})x_{t_{1}}^{m-1}\geq 0. \tag{3} $$
Multiplying inequality (2) by inequality (3) gives
$$\bigl(\rho(\mathcal{A})-a_{t_{1}\cdots t_{1}} -r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr) \bigl(\rho(\mathcal {A})-a_{t_{2}\cdots t_{2}} \bigr)x_{t_{1}}^{m-1} x_{t_{2}}^{m-1}\geq r_{t_{2}}(\mathcal{A}) r_{t_{1}}^{\overline{\Theta}_{t_{1}}}(\mathcal{A}) x_{t_{1}}^{m-1}x_{t_{2}}^{m-1}. $$
Note that \(x_{t_{2}}\geq x_{t_{1}}>0\), hence
$$\bigl(\rho(\mathcal{A})-a_{t_{1}\cdots t_{1}} -r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr) \bigl(\rho(\mathcal{A})-a_{t_{2}\cdots t_{2}} \bigr) \geq r_{t_{2}}(\mathcal{A}) r_{t_{1}}^{\overline{\Theta}_{t_{1}}}(\mathcal{A}), $$
that is,
$$\rho(\mathcal{A})^{2}- \bigl( a_{t_{1}\cdots t_{1}} +a_{t_{2}\cdots t_{2}} +r_{t_{1}}^{\Theta_{t_{1}}}(\mathcal{A}) \bigr)\rho(\mathcal {A})+a_{t_{2}\cdots t_{2}} \bigl( a_{t_{1}\cdots t_{1}} +r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr)\geq r_{t_{2}}(\mathcal{A}) r_{t_{1}}^{\overline{\Theta}_{t_{1}}}( \mathcal{A}). $$
Furthermore, since
$$\bigl(a_{t_{1}\cdots t_{1}}+a_{t_{2}\cdots t_{2}}+r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr)^{2}-4a_{t_{2}\cdots t_{2}} \bigl(a_{t_{1}\cdots t_{1}}+r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr)= \bigl(a_{t_{1}\cdots t_{1}}-a_{t_{2}\cdots t_{2}}+r_{t_{1}}^{\Theta_{t_{1}}}( \mathcal{A}) \bigr)^{2}, $$
solving this quadratic inequality for \(\rho(\mathcal{A})\) gives
$$\rho(\mathcal{A})\geq\Delta_{t_{1},t_{2}}(\mathcal{A})\geq \min _{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}(\mathcal{A})=\Delta_{\mathrm{min}}. $$
(ii) We now prove
$$\rho(\mathcal{A})\leq\max_{\substack{i,j\in N,\\ j\neq i}} \Delta_{i,j}( \mathcal{A})=\Delta_{\mathrm{max}}. $$
From (1), we have
$$\sum_{i_{2},\ldots,i_{m}\in N} a_{t_{n}i_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}}=\rho(\mathcal{A}) x_{t_{n}}^{m-1} $$
and
$$\sum_{i_{2},\ldots,i_{m}\in N} a_{t_{n-1}i_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}}=\rho(\mathcal{A}) x_{t_{n-1}}^{m-1}. $$
By an argument similar to that in (i), with the inequalities reversed, we obtain
$$\rho(\mathcal{A})\leq \Delta_{t_{n},t_{n-1}}(\mathcal{A})\leq\max _{\substack{i,j\in N,\\ j\neq i}} \Delta_{i,j}(\mathcal{A})=\Delta_{\mathrm{max}}. $$
The conclusion follows from (i) and (ii). □

Now we establish upper and lower bounds for \(\rho(\mathcal{A})\) of a nonnegative tensor \(\mathcal{A}\).

Lemma 2

(Lemma 3.3 in [21])

Suppose \(0\leq \mathcal{A}< \mathcal{C}\). Then \(\rho(\mathcal{A}) \leq\rho (\mathcal{C})\).

Theorem 5

Let \(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\in R^{[m,n]}\) be a nonnegative tensor with \(n \geq2\). Then
$$\Delta_{\mathrm{min}} \leq\rho(\mathcal{A})\leq \Delta_{\mathrm{max}}. $$

Proof

Let \(\mathcal{A}_{k} =\mathcal{A} + \frac{1}{k}\mathcal{E}\), \(k=1,2,\ldots\) , where \(\mathcal{E}\) denotes the tensor with every entry equal to 1. Then \(\{\mathcal{A}_{k}\}\) is a sequence of positive tensors satisfying
$$0 \leq\mathcal{A}< \cdots< \mathcal{A}_{k+1} < \mathcal{A}_{k} < \cdots < \mathcal{A}_{1}. $$
By Lemma 2, \(\{\rho(\mathcal{A}_{k})\}_{k=1}^{+\infty}\) is a monotone decreasing sequence with lower bound \(\rho(\mathcal{A})\). From the proof of Theorem 2.3 in [21], we have
$$\lim_{k\rightarrow+\infty} \rho(\mathcal{A}_{k})= \rho( \mathcal{A}). $$
Note that, for any \(i,j\in N\) with \(j\neq i\),
$$\Delta_{i,j}(\mathcal{A})< \cdots< \Delta_{i,j}( \mathcal{A}_{k+1}) < \Delta_{i,j}(\mathcal{A}_{k}) < \cdots< \Delta_{i,j}(\mathcal{A}_{1}), $$
and that \(\Delta_{i,j}\) depends continuously on the entries of the tensor; hence
$$\lim_{k\rightarrow+\infty} \Delta_{i,j}(\mathcal{A}_{k})= \Delta _{i,j}(\mathcal{A}). $$
Furthermore, since \(\mathcal{A}_{k}\) is positive, and hence irreducible and nonnegative, for \(k=1,2,\ldots\) , we have, from Lemma 1,
$$\min_{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}(\mathcal{A}_{k}) \leq\rho(\mathcal{A}_{k})\leq \max_{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}(\mathcal{A}_{k}). $$
Letting \(k\rightarrow+\infty\), we obtain
$$\Delta_{\mathrm{min}}=\min_{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}( \mathcal{A})\leq\rho(\mathcal{A})\leq \max_{\substack{i,j\in N, \\ j\neq i}} \Delta_{i,j}(\mathcal{A})=\Delta_{\mathrm{max}}. $$
The proof is completed. □
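Since the bounds of Theorem 5 depend only on the entries of \(\mathcal{A}\), they are straightforward to evaluate. A minimal sketch, assuming the theta_splits helper and the storage convention from the sketches above (the function name delta_bounds is illustrative):

```python
import numpy as np
from itertools import permutations

def delta_bounds(A):
    """Bounds Delta_min and Delta_max of Theorem 5 (sketch; uses theta_splits defined above)."""
    n, m = A.shape[0], A.ndim
    R = A.reshape(n, -1).sum(axis=1)                     # R_i(A)
    a_diag = np.array([A[(i,) * m] for i in range(n)])   # a_{i...i}
    r = R - a_diag                                       # r_i(A)
    r_theta, r_theta_bar = theta_splits(A)               # r_i^{Theta_i}(A), r_i^{bar Theta_i}(A)
    deltas = [0.5 * (a_diag[i] + a_diag[j] + r_theta[i]
                     + np.sqrt((a_diag[i] - a_diag[j] + r_theta[i]) ** 2
                               + 4.0 * r_theta_bar[i] * r[j]))
              for i, j in permutations(range(n), 2)]     # Delta_{i,j}(A) over all ordered pairs
    return min(deltas), max(deltas)
```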

We next compare the bounds in Theorem 5 with those in Theorem 3.

Theorem 6

Let \(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\in R^{[m,n]}\) be a nonnegative tensor with \(n \geq2\). Then
$$ R_{\mathrm{min}}\leq\Delta_{\mathrm{min}} \leq \Delta_{\mathrm{max}} \leq R_{\mathrm{max}}. \tag{4} $$

Proof

We first prove \(R_{\mathrm{min}}\leq\Delta_{\mathrm{min}}\). For any \(i,j\in N\), \(j\neq i\), if \(R_{i}(\mathcal{A})\leq R_{j}(\mathcal{A})\), then
$$a_{ii\cdots i}-a_{jj\cdots j}+r_{i}^{\Theta_{i}}(\mathcal{A}) +r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}) \leq r_{j}(\mathcal{A}). $$
Hence,
$$\begin{aligned}& \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}+ 4r_{i}^{\overline{\Theta}_{i}}( \mathcal{A})r_{j}(\mathcal{A}) \\& \quad \geq \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}(\mathcal{A}) \bigr)^{2} \\& \qquad {}+4r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}) \bigl( a_{ii\cdots i}-a_{jj\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) + r_{i}^{\overline{\Theta}_{i}}(\mathcal {A}) \bigr) \\& \quad = \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2} \\& \qquad {}+4r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}) \bigl( a_{ii\cdots i}-a_{jj\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr) +4 \bigl(r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}) \bigr)^{2} \\& \quad = \bigl( a_{i\cdots i}-a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A})+2r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}) \bigr)^{2}. \end{aligned}$$
When
$$a_{i\cdots i}-a_{j\cdots j}+r_{i}^{\Theta_{i}} ( \mathcal{A})+2r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}) > 0, $$
we have
$$\begin{aligned}& a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}(\mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}+4r_{i}^{\overline{\Theta}_{i}} ( \mathcal{A})r_{j}(\mathcal{A})} \\& \quad \geq a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A})+ \bigl( a_{i\cdots i}-a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A})+2r_{i}^{\overline {\Theta}_{i}}(\mathcal{A}) \bigr) \\& \quad = 2 \bigl( a_{i\cdots i}+r_{i}^{\Theta_{i}}( \mathcal{A})+r_{i}^{\overline {\Theta}_{i}}(\mathcal{A}) \bigr) \\& \quad = 2R_{i}(\mathcal{A}). \end{aligned}$$
When
$$a_{i\cdots i}-a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A})+2r_{i}^{\overline{\Theta}_{i}} (\mathcal{A})\leq0, $$
that is,
$$a_{i\cdots i}+r_{i}^{\Theta_{i}}(\mathcal{A})+2r_{i}^{\overline{\Theta }_{i}}( \mathcal{A})\leq a_{j\cdots j}, $$
we have
$$\begin{aligned}& a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}(\mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal {A}) \bigr)^{2}+4r_{i}^{\overline{\Theta}_{i}}( \mathcal{A})r_{j}(\mathcal {A})} \\& \quad \geq a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}} \\& \quad = a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A})- \bigl( a_{i\cdots i}-a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr) \\& \quad = 2a_{j\cdots j} \\& \quad \geq 2 \bigl(a_{i\cdots i}+r_{i}^{\Theta_{i}}( \mathcal{A})+2r_{i}^{\overline{\Theta}_{i}}(\mathcal {A}) \bigr) \\& \quad \geq 2 \bigl(a_{i\cdots i}+r_{i}^{\Theta_{i}}( \mathcal{A})+r_{i}^{\overline{\Theta}_{i}} (\mathcal{A}) \bigr) \\& \quad = 2R_{i}(\mathcal{A}) . \end{aligned}$$
Therefore,
$$\frac{1}{2} \Bigl(a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}+4r_{i}^{\overline{\Theta }_{i}}( \mathcal{A})r_{j}(\mathcal {A})} \Bigr)\geq R_{i}( \mathcal{A}), $$
which implies
$$\begin{aligned}& \min_{\substack{i,j\in N, \\ j\neq i}}\frac{1}{2} \Bigl(a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}(\mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}+4r_{i}^{\overline{\Theta }_{i}}(\mathcal{A}) r_{j}(\mathcal{A})} \Bigr) \\& \quad \geq \min_{i\in N}R_{i}(\mathcal{A}), \end{aligned}$$
i.e., \(R_{\mathrm{min}} \leq\Delta_{\mathrm{min}}\).
On the other hand, if for any \(i,j\in N\), \(j\neq i\),
$$R_{j}(\mathcal{A}) \leq R_{i}(\mathcal{A}), $$
then
$$a_{jj\cdots j}-a_{ii\cdots i}-r_{i}^{\Theta_{i}}(\mathcal{A}) +r_{j}(\mathcal{A}) \leq r_{i}^{\overline{\Theta}_{i}}(\mathcal{A}). $$
Similarly, we can also obtain
$$\frac{1}{2} \Bigl(a_{i\cdots i}+ a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) + \sqrt{ \bigl(a_{i\cdots i}- a_{j\cdots j}+r_{i}^{\Theta_{i}}( \mathcal{A}) \bigr)^{2}+4r_{i}^{\overline{\Theta }_{i}}( \mathcal{A})r_{j}(\mathcal {A})} \Bigr)\geq R_{j}( \mathcal{A}), $$
and hence \(R_{\mathrm{min}}\leq\Delta_{\mathrm{min}}\) holds in this case as well. Therefore, the first inequality in (4) holds; the middle inequality is trivial. In a similar way, we can prove that the last inequality in (4) also holds. The conclusion follows. □

Example 1

Consider the nonnegative tensor
$$\mathcal{A}=\bigl[A(:,:,1),A(:,:,2),A(:,:,3)\bigr]\in R^{[3,3]}, $$
where
$$\begin{aligned}& A(:,:,1)=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0.2192 &0.4411& 0.5232\\ 0.7637 &0.5239& 0.8330\\ 0.7993 &0.3710& 0.5328 \end{array} \right ), \\& A(:,:,2)= \left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0.4380& 0.0482& 0.1325\\ 0.1803& 0.6729& 0.1809 \\ 0.3773& 0.1079& 0.8965 \end{array} \right ), \\& A(:,:,3)=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{}} 0.0779 & 0.1982 & 0.4691\\ 0.5135 & 0.8284 & 0.7352\\ 0.1135 & 0.1163 & 0.8645 \end{array} \right ). \end{aligned}$$
We now compute the bounds for \(\rho(\mathcal{A})\). By Theorem 3, we have
$$2.5474 \leq\rho(\mathcal{A}) \leq5.2318. $$
By Theorem 4, we have
$$\rho(\mathcal{A})\leq5.0753. $$
By Theorem 5, we have
$$3.0097 \leq\rho(\mathcal{A})\leq 4.7894. $$
This example shows that the bounds in Theorem 5 can be sharper than those in Theorem 3 (Lemma 5.2 of [21]), and that the upper bound in Theorem 5 can be sharper than that in Theorem 4 (Theorem 3.3 of [22]).
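Assuming the helper sketches given earlier (yang_yang_bounds, omega_max, delta_bounds and theta_splits), the tensor of Example 1 can be assembled slice by slice and the reported bounds recomputed; the printed values should agree, up to rounding, with the numbers quoted above.

```python
import numpy as np

# Slices A(:,:,k) of Example 1, stacked so that A[i, j, k] = a_{ijk} (0-based indices).
A1 = [[0.2192, 0.4411, 0.5232],
      [0.7637, 0.5239, 0.8330],
      [0.7993, 0.3710, 0.5328]]
A2 = [[0.4380, 0.0482, 0.1325],
      [0.1803, 0.6729, 0.1809],
      [0.3773, 0.1079, 0.8965]]
A3 = [[0.0779, 0.1982, 0.4691],
      [0.5135, 0.8284, 0.7352],
      [0.1135, 0.1163, 0.8645]]
A = np.stack([A1, A2, A3], axis=-1)   # shape (3, 3, 3)

print(yang_yang_bounds(A))   # Theorem 3: approximately (2.5474, 5.2318)
print(omega_max(A))          # Theorem 4: approximately 5.0753
print(delta_bounds(A))       # Theorem 5: approximately (3.0097, 4.7894)
```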

3 Conclusions

In this paper, we obtained a lower bound and an upper bound for the spectral radius of a nonnegative tensor, which improve the known bounds obtained by Yang and Yang [21] and by Li et al. [22].

Declarations

Acknowledgements

The authors are very indebted to the referees for their valuable comments and corrections, which improved the original manuscript of this paper. This work was supported by the National Natural Science Foundation of China (11361074), the Applied Basic Research Programs of the Science and Technology Department of Yunnan Province (2013FD002), and IRTSTYN.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1) College of Mathematics, Jilin University
(2) School of Mathematics and Statistics, Yunnan University

References

1. Cartwright, D, Sturmfels, B: The number of eigenvalues of a tensor. Linear Algebra Appl. 438, 942-952 (2013)
2. Chang, KC, Pearson, K, Zhang, T: Perron-Frobenius theorem for nonnegative tensors. Commun. Math. Sci. 6, 507-520 (2008)
3. De Lathauwer, L, De Moor, B, Vandewalle, J: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21, 1253-1278 (2000)
4. Lim, LH: Singular values and eigenvalues of tensors: a variational approach. In: CAMSAP ’05: Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, pp. 129-132 (2005)
5. Qi, L: Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 40, 1302-1324 (2005)
6. Wang, Y, Qi, L, Zhang, X: A practical method for computing the largest M-eigenvalue of a fourth-order partially symmetric tensor. Numer. Linear Algebra Appl. 16, 589-601 (2009)
7. Yang, Q, Yang, Y: Further results for Perron-Frobenius theorem for nonnegative tensors II. SIAM J. Matrix Anal. Appl. 32, 1236-1250 (2011)
8. De Lathauwer, L, De Moor, B, Vandewalle, J: On the best rank-1 and rank-\((R_{1},R_{2},\ldots,R_{N})\) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 21, 1324-1342 (2000)
9. Friedland, S, Gaubert, S, Han, L: Perron-Frobenius theorem for nonnegative multilinear forms and extensions. Linear Algebra Appl. 438, 738-749 (2013)
10. He, J, Huang, TZ: Upper bound for the largest Z-eigenvalue of positive tensors. Appl. Math. Lett. 38, 110-114 (2014)
11. Hu, SL, Huang, ZH, Ling, C, Qi, L: E-determinants of tensors (2011). arXiv:1109.0348v3 [math.NA]
12. Kofidis, E, Regalia, PA: On the best rank-1 approximation of higher-order supersymmetric tensors. SIAM J. Matrix Anal. Appl. 23, 863-884 (2002)
13. Kolda, TG, Mayo, JR: Shifted power method for computing tensor eigenpairs (2011). arXiv:1007.1267v2 [math.NA]
14. Ni, Q, Qi, L, Wang, F: An eigenvalue method for the positive definiteness identification problem. IEEE Trans. Autom. Control 53, 1096-1107 (2008)
15. Ni, G, Qi, L, Wang, F, Wang, Y: The degree of the E-characteristic polynomial of an even order tensor. J. Math. Anal. Appl. 329, 1218-1229 (2007)
16. Ng, M, Qi, L, Zhou, G: Finding the largest eigenvalue of a nonnegative tensor. SIAM J. Matrix Anal. Appl. 31, 1090-1099 (2009)
17. Qi, L: Eigenvalues and invariants of tensors. J. Math. Anal. Appl. 325, 1363-1377 (2007)
18. Qi, L, Sun, W, Wang, Y: Numerical multilinear algebra and its applications. Front. Math. China 2, 501-526 (2007)
19. Qi, L, Wang, F, Wang, Y: Z-Eigenvalue methods for a global polynomial optimization problem. Math. Program. 118, 301-316 (2009)
20. Qi, L, Wang, Y, Wu, EX: D-Eigenvalues of diffusion kurtosis tensors. J. Comput. Appl. Math. 221, 150-157 (2008)
21. Yang, Y, Yang, Q: Further results for Perron-Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl. 31, 2517-2530 (2010)
22. Li, C, Li, Y, Kong, X: New eigenvalue inclusion sets for tensors. Numer. Linear Algebra Appl. 21, 39-50 (2014)

Copyright

© Li and Li 2015