
A sharp upper bound on the spectral radius of a nonnegative k-uniform tensor and its applications to (directed) hypergraphs


Abstract

In this paper, we obtain a sharp upper bound on the spectral radius of a nonnegative k-uniform tensor and characterize when this bound is achieved. This result implies the main result of [X. Duan and B. Zhou, Sharp bounds on the spectral radius of a nonnegative matrix, Linear Algebra Appl. 439:2961–2970, 2013] for nonnegative matrices; it improves some known bounds on the adjacency spectral radius and the signless Laplacian spectral radius of a uniform hypergraph from [D.M. Chen, Z.B. Chen and X.D. Zhang, Spectral radius of uniform hypergraphs and degree sequences, Front. Math. China 6:1279–1288, 2017]; and it yields some new sharp upper bounds on the adjacency spectral radius and the signless Laplacian spectral radius of a uniform directed hypergraph. Moreover, a characterization of a strongly connected k-uniform directed hypergraph is obtained.

Introduction

Let k, n be two positive integers. As in [17, 21], an order k dimension n tensor \(\mathbb {A}=(a_{i_{1}\cdots i_{k}})\) over the real field \(\mathbb{R}\) is a multidimensional array with \(n^{k}\) entries \(a_{i_{1}\cdots i_{k}}\in\mathbb{R}\), where \(i_{j}\in[n]=\{1,2,\ldots, n\}\), \(j\in[k]=\{ 1,2,\ldots, k\}\). Obviously, a vector is an order 1 tensor and a square matrix is an order 2 tensor.

Furthermore, we call a tensor \(\mathbb{A}\) nonnegative (positive), denoted by \(\mathbb{A}\geq0\) (\(\mathbb{A}>0\)), if every entry has \(a_{i_{1}\cdots i_{k}}\geq0\) (\(a_{i_{1}\cdots i_{k}}>0\)). The tensor \(\mathbb{A}=(a_{i_{1}\cdots i_{k}})\) is called symmetric if \(a_{i_{1}\cdots i_{k}}=a_{\sigma(i_{1})\cdots\sigma(i_{k})}\), where σ is any permutation of the indices.

Let \(\mathbb{A}\) be an order k dimension n tensor. If there is a complex number λ and a nonzero complex vector \(x=(x_{1},x_{2},\ldots, x_{n})^{T}\) such that

$$\mathbb{A}x^{k-1}=\lambda x^{[k-1]}, $$

then λ is called an eigenvalue of \(\mathbb{A}\) and x an eigenvector of \(\mathbb{A}\) corresponding to the eigenvalue λ [17, 18, 21]. Here \(\mathbb{A}x^{k-1}\) and \(x^{[k-1]}\) are vectors, whose ith entries are

$$\bigl(\mathbb{A}x^{k-1} \bigr)_{i}=\sum _{i_{2},\ldots, i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}x_{i_{2}}\cdots x_{i_{k}} $$

and \((x^{[k-1]})_{i}=x_{i}^{k-1}\), respectively. Moreover, the spectral radius \(\rho(\mathbb{A})\) of a tensor \(\mathbb{A}\) is defined as

$$\rho(\mathbb{A})=\max \bigl\{ \vert \lambda \vert : \lambda \mbox{ is an eigenvalue of } \mathbb{A} \bigr\} . $$

Some properties of the spectral radius of a nonnegative tensor can be found in [3, 9, 14, 16–18, 21, 25–27].
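To make the definitions concrete, the following Python sketch evaluates \((\mathbb{A}x^{k-1})_i\) for an order-3 tensor stored as nested lists and estimates \(\rho(\mathbb{A})\) by a power-type iteration in the spirit of the Ng–Qi–Zhou algorithm. The function names and the fixed iteration count are illustrative choices, and convergence is only guaranteed under suitable irreducibility assumptions.

```python
def apply_tensor(A, x):
    """Compute (A x^{k-1})_i for an order-3, dimension-n tensor A stored
    as nested lists, following the defining sum over i_2, i_3."""
    n = len(x)
    return [sum(A[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
            for i in range(n)]

def spectral_radius_estimate(A, iters=200):
    """Power-type iteration in the spirit of the Ng-Qi-Zhou algorithm for a
    nonnegative order-3 tensor; assumes the iterates stay strictly positive
    (e.g. A weakly irreducible)."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = apply_tensor(A, x)
        x = [yi ** 0.5 for yi in y]        # x <- (A x^2)^{[1/2]} (k = 3)
        m = max(x)
        x = [xi / m for xi in x]           # normalize in the max norm
    y = apply_tensor(A, x)
    return max(y[i] / x[i] ** 2 for i in range(n))
```

For the adjacency tensor of a single 3-uniform edge (all slice sums equal to 1), the iteration returns 1, in line with the row-sum bounds of Lemma 1.5 below.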

Definition 1.1

([22])

Let \(\mathbb{A}\) and \(\mathbb{B}\) be two tensors with order \(m\geq2\) and \(k\geq1\) dimension n, respectively. The general product \(\mathbb {AB}\) of \(\mathbb{A}\) and \(\mathbb{B}\) is the following tensor \(\mathbb{C}\) with order \((m-1)(k-1)+1\) and dimension n:

$$c_{i\alpha_{1}\cdots\alpha_{m-1}}=\sum_{i_{2},\ldots, i_{m}=1}^{n}a_{ii_{2}\cdots i_{m}}b_{i_{2}\alpha_{1}} \cdots b_{i_{m}\alpha_{m-1}} \quad\bigl(i\in[n], \alpha_{1}, \ldots, \alpha_{m-1}\in[n]^{k-1} \bigr). $$

Definition 1.2

([22])

Let \(\mathbb{A}=(a_{i_{1}i_{2}\cdots i_{k}})\) and \(\mathbb {B}=(b_{i_{1}i_{2}\cdots i_{k}})\) be two order k dimension n tensors. We say that \(\mathbb{A}\) and \(\mathbb{B}\) are diagonal similar if there exists an invertible diagonal matrix \(D=\operatorname{diag}(d_{11},d_{22},\ldots,d_{nn})\) of order n such that \(\mathbb{B}=D^{-(k-1)}\mathbb{A}D\) with entries

$$b_{i_{1}i_{2}\cdots i_{k}}=d_{i_{1}i_{1}}^{-(k-1)}a_{i_{1}i_{2}\cdots i_{k}}d_{i_{2}i_{2}} \cdots d_{i_{k}i_{k}} . $$

Theorem 1.3

([22])

If two order k dimension n tensors \(\mathbb{A}\) and \(\mathbb{B}\) are diagonal similar, then they have the same eigenvalues (including multiplicities) and hence the same spectral radius.
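The entrywise formula in Definition 1.2 is easy to implement; the sketch below (order-3 case, illustrative names) constructs \(\mathbb{B}=D^{-(k-1)}\mathbb{A}D\) directly. If \(\mathbb{A}x^{k-1}=\lambda x^{[k-1]}\), then \(y=D^{-1}x\) satisfies \(\mathbb{B}y^{k-1}=\lambda y^{[k-1]}\), which is the mechanism behind Theorem 1.3.

```python
def similar_tensor(A, d):
    """Entries of B = D^{-(k-1)} A D for an order-3 tensor A (nested lists)
    and D = diag(d), following the entrywise formula of Definition 1.2."""
    n = len(d)
    return [[[A[i][j][k] * d[j] * d[k] / d[i] ** 2 for k in range(n)]
             for j in range(n)] for i in range(n)]
```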

Definition 1.4

([9, 26])

Let \(\mathbb{A}\) be an order k dimension n tensor (not necessarily nonnegative). If there exists a nonempty proper subset I of the set \([n]\) such that

$$ a_{i_{1}i_{2}\cdots i_{k}}=0 \quad\mbox{for all } i_{1}\in I \mbox{ and all } i_{2},\ldots,i_{k} \mbox{ with } i_{j}\notin I \mbox{ for some } j\in\{2,\ldots,k\}, $$

then \(\mathbb{A}\) is called weakly reducible (or sometimes I-weakly reducible). If \(\mathbb{A}\) is not weakly reducible, then \(\mathbb{A}\) is called weakly irreducible.

The ith slice of a tensor \(\mathbb{A}\) with order \(k\geq2\) and dimension n, denoted by \(\mathbb{A}_{i}\) in [23], is the subtensor of \(\mathbb{A}\) with order \(k-1\) and dimension n such that \((\mathbb{A}_{i})_{i_{2}\cdots i_{k}}=a_{ii_{2}\cdots i_{k}}\). Then the ith slice sum (also called “the ith row sum”) of \(\mathbb{A}\) is defined as

$$r_{i}(\mathbb{A})=\sum_{i_{2}, \ldots, i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}\quad \bigl(i\in[n] \bigr). $$

Lemma 1.5

([13, 25])

Let \(\mathbb{A}\) be a nonnegative tensor with order \(k\geq2\) and dimension n. Then we have

$$ \min_{1\leq i\leq n}r_{i}(\mathbb{A})\leq\rho( \mathbb{A})\leq\max_{1\leq i\leq n}r_{i}(\mathbb{A}). $$
(1.1)

Moreover, if \(\mathbb{A}\) is weakly irreducible, then one of the equalities in (1.1) holds if and only if \(r_{1}(\mathbb{A})=r_{2}(\mathbb{A})=\cdots=r_{n}(\mathbb{A})\).
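Computing slice sums is immediate; the following sketch (order-3 case, illustrative names) evaluates the \(r_i(\mathbb{A})\) appearing in the two-sided bound (1.1).

```python
def slice_sums(A):
    """Slice (row) sums r_i(A) of an order-3, dimension-n tensor stored
    as nested lists; by Lemma 1.5, min r_i <= rho(A) <= max r_i."""
    n = len(A)
    return [sum(A[i][j][k] for j in range(n) for k in range(n))
            for i in range(n)]
```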

We denote by \(\binom{n}{r}\) the number of r-combinations of an n-element set, and let \(\binom{n}{r}=0\) if \(r>n\) or \(r<0\). Clearly, \(\binom{n}{r}=\frac{n!}{r!(n-r)!}\) when \(0\leq r\leq n\).

Lemma 1.6

([2])

Let n, k, and m be positive integers. Then

  1. (1)

    \(\sum_{r=0}^{k}\binom{n}{r}\binom{m}{k-r}=\binom{n+m}{k}\) (\(n+m\geq k \));

  2. (2)

    \(\binom{n}{k}=\frac{n}{k}\binom{n-1}{k-1}\) (\(n\geq k \geq1 \)).
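Both identities are easy to spot-check numerically with Python's `math.comb`; the function names below are illustrative.

```python
from math import comb

def vandermonde_lhs(n, m, k):
    """Left-hand side of identity (1): sum_r C(n, r) * C(m, k - r);
    math.comb already returns 0 when the lower index exceeds the upper."""
    return sum(comb(n, r) * comb(m, k - r) for r in range(k + 1))

def absorption_holds(n, k):
    """Identity (2), cross-multiplied to stay in integers:
    k * C(n, k) == n * C(n - 1, k - 1)."""
    return k * comb(n, k) == n * comb(n - 1, k - 1)
```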

Let \(S=\{s_{1}, s_{2},\ldots, s_{n}\}\) be an n-element set, where \(s_{i}\neq s_{j}\) whenever \(i\neq j\).

Definition 1.7

Let \(n\geq2\) and \(k\geq2\), and let \(\mathbb{A}\) be an order k dimension n tensor. We call \(\mathbb{A}\) a k-uniform tensor if its entries satisfy the following: \(a_{i_{1}i_{2}\cdots i_{k}} \in\mathbb{R}\) if \(\{i_{1}, i_{2}, \ldots, i_{k}\}\) is a k-element set or \(i_{1}=i_{2}=\cdots=i_{k}\); otherwise \(a_{i_{1}i_{2}\cdots i_{k}}=0\).

Obviously, a 2-uniform tensor is an ordinary matrix. Let \(\mathbb {A}\) be a k-uniform tensor with order k dimension n. Then \(a_{i_{1}i_{2}\cdots i_{k}}\neq0\) implies \(\{i_{1}, i_{2},\ldots, i_{k}\} \) is a k-element set or \(i_{1}=i_{2}=\cdots=i_{k}\).

In this paper, we obtain a sharp upper bound on the spectral radius of a nonnegative k-uniform tensor in Sect. 2. By applying the bound to a nonnegative matrix, we can obtain the main result in [7]. In Sect. 3, we apply the bound to the adjacency spectral radius and signless Laplacian spectral radius of a uniform hypergraph and improve some known results in [4]. Furthermore, we give a characterization of a strongly connected k-uniform directed hypergraph and obtain some new results by applying the bound to the adjacency spectral radius and the signless Laplacian spectral radius of a uniform directed hypergraph in Sect. 4.

Main results

In this section, we obtain a sharp upper bound on the spectral radius of a nonnegative k-uniform tensor and characterize when this bound is achieved. Furthermore, this bound implies the main result in [7] for a nonnegative matrix.

Theorem 2.1

Let \(n\geq2\), \(k\geq2\), let \(\mathbb{A}=(a_{i_{1}i_{2}\cdots i_{k}})\) be a nonnegative k-uniform tensor with order k dimension n, and let \(r_{i}=r_{i}(\mathbb{A})=\sum_{i_{2},\ldots, i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}\) for \(i\in[n]\) with \(r_{1}\geq r_{2}\geq\cdots\geq r_{n}\). Let M be the largest diagonal element and \(N>0\) be the largest non-diagonal element of the tensor \(\mathbb{A}\), \(N_{1}=N(k-2)!\binom{n-2}{k-2}\), \(\phi_{1}=r_{1}\), and

$$ \phi_{s}=\frac{1}{2} \Biggl\{ r_{s}+M-N_{1}+ \sqrt {(r_{s}-M+N_{1})^{2}+4N_{1} \sum_{t=1}^{s-1}(r_{t}-r_{s})} \Biggr\} $$
(2.1)

for \(2\leq s\leq n\). Then

$$\rho(\mathbb{A})\leq\min_{1\leq s\leq n}\phi_{s}. $$

Let \(\phi_{s}=\min_{1\leq l\leq n}\phi_{l}\). If \(\mathbb{A}\) is weakly irreducible, then

  1. (1)

    when \(k=2\), \(\rho(\mathbb{A})=\phi_{s}\) if and only if \(r_{1}=r_{2}=\cdots=r_{n}\) or, for some t (\(2\leq t\leq s\)), \(\mathbb{A}\) satisfies the following conditions:

    1. (i)

      \(a_{ii}=M\) for \(1\leq i\leq t-1\);

    2. (ii)

      \(a_{ii_{2}}=N\) for \(1\leq i\leq n\), \(1\leq i_{2}\leq t-1\), and \(i\neq i_{2}\);

    3. (iii)

      \(r_{t}=r_{t+1}=\cdots=r_{n}\);

  2. (2)

    when \(k\geq3\), \(\rho(\mathbb{A})=\phi_{s}\) if and only if \(r_{1}=r_{2}=\cdots=r_{n}\).

Proof

Firstly, we show \(\rho(\mathbb{A})\leq\phi_{s}\) for \(1\leq s\leq n\).

If \(s=1\), then by Lemma 1.5 we have \(\rho(\mathbb{A})\leq r_{1}=\phi_{1}\). Now we only consider the cases of \(2\leq s\leq n\).

Let

$$U=\operatorname{diag}(x_{1},\ldots,x_{s-1},x_{s}, \ldots,x_{n}), $$

where \(x_{i}>0\) for \(1\leq i\leq n\), \(x_{i}^{k-1}=1+\frac{r_{i}-r_{s}}{\phi _{s}+N_{1}-M}\) for \(1\leq i\leq s-1\), and \(x_{s}=\cdots=x_{n}=1\).

Now we show \(x_{i}\geq1\) for \(1\leq i\leq s-1\). By \(r_{1}\geq r_{2}\geq \cdots\geq r_{n}\), we only need to show \(\phi_{s}+N_{1}-M>0\).

If \(\sum_{t=1}^{s-1}(r_{t}-r_{s})>0\), then by (2.1) we have

$$\phi_{s}>\frac{1}{2} \bigl(r_{s}+M-N_{1}+ \vert r_{s}-M+N_{1} \vert \bigr)\geq \frac {1}{2} \bigl(r_{s}+M-N_{1}-(r_{s}-M+N_{1}) \bigr)=M-N_{1}, $$

and thus \(\phi_{s}-M+N_{1}>0\).

If \(\sum_{t=1}^{s-1}(r_{t}-r_{s})=0\), then \(r_{1}=r_{2}=\cdots=r_{s}\). Thus \(\phi_{s}-M+N_{1}>0\) by \(r_{1}\geq M\) and \(\phi_{s}=r_{s}\) from (2.1).

Combining the above arguments, we know \(x_{i}\geq1\), and then U is an invertible diagonal matrix. Let \(\mathbb{B}=U^{-(k-1)}\mathbb{A}U=(b_{i_{1}\cdots i_{k}})\). By Theorem 1.3, we have

$$ \rho(\mathbb{A})=\rho(\mathbb{B}). $$
(2.2)

By (2.1), it is easy to see that

$$\phi_{s}^{2}-(r_{s}+M-N_{1}) \phi_{s}+(M-N_{1})r_{s}-N_{1}\sum _{t=1}^{s-1}(r_{t}-r_{s}) =0. $$

Then

$$\begin{aligned} (\phi_{s}-M+N_{1}) (\phi_{s}-r_{s})&=N_{1} \sum_{t=1}^{s-1}(r_{t}-r_{s}) =N_{1}\sum_{t=1}^{s-1}( \phi_{s}-M+N_{1}) \bigl(x_{t}^{k-1}-1 \bigr) \\&=N_{1}(\phi_{s}-M+N_{1}) \Biggl(\sum _{t=1}^{s-1}x_{t}^{k-1}-(s-1) \Biggr).\end{aligned} $$

Therefore, \(\phi_{s}=r_{s}+ N_{1}\sum_{t=1}^{s-1}x_{t}^{k-1}-N_{1}(s-1)\) and thus

$$ \sum_{t=1}^{s-1}x_{t}^{k-1}= \frac{\phi_{s}-r_{s}+(s-1)N_{1}}{N_{1}}. $$
(2.3)

In the following we show \(r_{i}(\mathbb{B})\leq\phi_{s}\) for any \(i\in [n]=\{1,2,\ldots,n\}\).

Let \(S(\mathbb{A})=\{\{i, i_{2}, \ldots, i_{k}\}|a_{ii_{2}\cdots i_{k}}\neq0\}\). Since M is the largest diagonal element and \(N>0\) is the largest non-diagonal element of tensor \(\mathbb{A}\), by Definition 1.2, we have

$$\begin{aligned} r_{i}(\mathbb{B}) ={}&r_{i} \bigl(U^{-(k-1)}\mathbb{A}U \bigr) \\ ={}&\sum_{i_{2},\ldots, i_{k}=1}^{n} \bigl(U^{-(k-1)} \bigr)_{ii}a_{ii_{2}\cdots i_{k}}U_{i_{2}i_{2}}\cdots U_{i_{k}i_{k}} \\ ={}&\frac{1}{x_{i}^{k-1}}\sum_{i_{2},\ldots,i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}x_{i_{2}} \cdots x_{i_{k}} \\ ={}&\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}+\sum_{i_{2},\ldots ,i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}(x_{i_{2}} \cdots x_{i_{k}}-1) \Biggr\} \\ =&\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}+a_{i\cdots i} \bigl(x_{i}^{k-1}-1 \bigr) \\ &+\sum_{i_{2},\ldots,i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}(x_{i_{2}} \cdots x_{i_{k}}-1)-a_{i\cdots i} \bigl(x_{i}^{k-1}-1 \bigr) \Biggr\} \\ \leq{}&\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}+M \bigl(x_{i}^{k-1}-1 \bigr) \\ &+\sum_{i_{2},\ldots,i_{k}=1}^{n}a_{ii_{2}\cdots i_{k}}(x_{i_{2}} \cdots x_{i_{k}}-1)-a_{i\cdots i} \bigl(x_{i}^{k-1}-1 \bigr) \Biggr\} \\ \leq{}&\frac{1}{x_{i}^{k-1}} \biggl\{ r_{i}+M \bigl(x_{i}^{k-1}-1 \bigr) +N(k-1)!\sum_{\{i,i_{2},\ldots, i_{k}\}\in S(\mathbb {A})}(x_{i_{2}}\cdots x_{i_{k}}-1) \biggr\} \\ \leq{}&\frac{1}{x_{i}^{k-1}} \biggl\{ r_{i}+M \bigl(x_{i}^{k-1}-1 \bigr) \\ &+N(k-1)!\sum_{\{i,i_{2},\ldots, i_{k}\}\in S(\mathbb{A})} \biggl(\frac {x_{i_{2}}^{k-1}+\cdots+x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \biggr\} \\ \leq{}& \frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}+M \bigl(x_{i}^{k-1}-1 \bigr) \\ &+N(k-1)!\sum_{r=0}^{k-1}\sum _{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \biggl(\frac{x_{i_{2}}^{k-1}+\cdots +x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \Biggr\} \\ ={}&\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}+M \bigl(x_{i}^{k-1}-1 \bigr) \\ &+N(k-1)!\sum_{r=0}^{k-2}\sum _{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \biggl(\frac{x_{i_{2}}^{k-1}+\cdots +x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \Biggr\} , \end{aligned}$$

where \(N_{r}^{s}\) denotes the family of \((k-1)\)-element subsets \(\{i_{2},\ldots,i_{k}\}\) of \(\{1,2,\ldots, n\}\setminus\{i\}\) containing exactly r elements that are not less than s, for \(0\leq r\leq k-1\). Obviously, the family of all \((k-1)\)-element subsets of \(\{1,2,\ldots, n\}\setminus\{i\}\) equals \(\bigcup_{r=0}^{k-1}N_{r}^{s}\). Thus we have

$$ r_{i}(\mathbb{B})\leq M+\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}-M+N(k-1)!\sum_{r=0}^{k-2} \sum_{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \biggl(\frac{x_{i_{2}}^{k-1}+\cdots +x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \Biggr\} , $$
(2.4)

and the equality holds in (2.4) if and only if (a), (b), (c), and (d) hold:

  1. (a)

    \(x_{i}^{k-1}=1\) or \(a_{i\cdots i}=M\) for \(x_{i}>1\);

  2. (b)

    for any \(\{i,i_{2},\ldots,i_{k}\}\in S(\mathbb{A})\), \(x_{i_{2}}\cdots x_{i_{k}}=1\) or \(a_{ii_{2}\cdots i_{k}}=N\) for \(x_{i_{2}}\cdots x_{i_{k}}>1\);

  3. (c)

    \(x_{i_{2}}=\cdots=x_{i_{k}}\) for any \(\{i,i_{2},\ldots,i_{k}\}\in S(\mathbb{A})\);

  4. (d)

    \(\sum_{\{i,i_{2},\ldots, i_{k}\}\in S(\mathbb{A})} (\frac {x_{i_{2}}^{k-1}+\cdots+x_{i_{k}}^{k-1}}{k-1}-1 )=\sum_{r=0}^{k-1}\sum_{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} (\frac {x_{i_{2}}^{k-1}+\cdots+x_{i_{k}}^{k-1}}{k-1}-1 )\).

Case 1: \(s\leq i\leq n\).

Clearly, \(\{i_{2},\ldots, i_{k}\}\in N_{r}^{s}\) means choosing r elements from the set \(\{s,\ldots, n\}\setminus\{i\}\) and \(k-1-r\) elements from the set \(\{1,2,\ldots, s-1\}\); hence

$$ \sum_{r=0}^{k-2}\sum _{\{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} 1=\sum_{r=0}^{k-2} \binom{s-1}{k-1-r}\binom{n-s}{r}. $$
(2.5)

Similarly, we have

$$ \begin{aligned}[b] &\sum_{r=0}^{k-2}\sum _{\{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \bigl(x_{i_{2}}^{k-1}+ \cdots+x_{i_{k}}^{k-1} \bigr) \\ &\quad=\sum_{r=0}^{k-2} \binom{s-2}{k-2-r}\binom{n-s}{r} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1} \Biggr)\\&\qquad+\sum_{r=0}^{k-2} \binom{s-1}{k-1-r}\binom{n-s-1}{r-1} \Biggl(\sum_{t=s}^{n}x_{t}^{k-1}-x_{i}^{k-1} \Biggr).\end{aligned} $$
(2.6)

Noting that \(x_{s}=\cdots=x_{n}=1\) and \(r_{1}\geq\cdots\geq r_{s}\geq\cdots \geq r_{i}\geq\cdots\geq r_{n}\), by (2.3), (2.4), (2.5), and (2.6) we have

$$\begin{aligned} r_{i}(\mathbb{B}) \leq {}&r_{i}+N(k-1)!\sum _{r=0}^{k-2}\sum_{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \biggl(\frac{x_{i_{2}}^{k-1}+\cdots+x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \\ \leq{}&r_{s}+N(k-2)!\sum_{r=0}^{k-2} \binom{s-2}{k-2-r}\binom{n-s}{r} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1} \Biggr) \\ &+N(k-2)!\sum_{r=0}^{k-2}\binom{s-1}{k-1-r} \binom{n-s-1}{r-1} \Biggl(\sum_{t=s}^{n}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) \\ &-N(k-1)!\sum_{r=0}^{k-2}\binom{s-1}{k-1-r} \binom{n-s}{r} \\ ={}& r_{s}+N(k-2)!\binom{n-2}{k-2}\sum_{t=1}^{s-1}x_{t}^{k-1} +N(k-2)!\sum_{r=0}^{k-2}\binom{s-1}{k-1-r} \binom{n-s-1}{r-1}(n-s) \\ & -N(k-1)!\sum_{r=0}^{k-2}\binom{s-1}{k-1-r} \binom{n-s}{r} \\ ={}& r_{s}+N_{1}\sum_{t=1}^{s-1}x_{t}^{k-1} \\ &+N(k-2)!\sum_{r=0}^{k-2}\binom{s-1}{k-1-r} \biggl[\binom{n-s-1}{r-1}(n-s) -(k-1)\binom{n-s}{r} \biggr] \\ ={}& r_{s}+N_{1}\sum_{t=1}^{s-1}x_{t}^{k-1}-N(k-2)! \sum_{r=0}^{k-2}\binom {s-1}{k-1-r} \binom{n-s}{r}(k-1-r) \\ ={}& r_{s}+N_{1}\sum_{t=1}^{s-1}x_{t}^{k-1}-N(k-2)! \sum_{r=0}^{k-2}(s-1)\binom{s-2}{k-2-r} \binom{n-s}{r} \\ ={}& r_{s}+N_{1}\sum _{t=1}^{s-1}x_{t}^{k-1}-N(k-2)!(s-1) \binom{n-2}{k-2} \\ = {}&r_{s}+N_{1}\sum _{t=1}^{s-1}x_{t}^{k-1}-(s-1)N_{1} \\ ={}&\phi_{s}, \end{aligned}$$

where equality holds if and only if the following condition (e) holds: (e) \(r_{i}=r_{s}\).

Case 2: \(1\leq i\leq s-1\).

Subcase 2.1: \(s\geq3\).

Clearly, \(\{i_{2},\ldots, i_{k}\}\in N_{r}^{s}\) means choosing r elements from the set \(\{s,\ldots, n\}\) and \(k-1-r\) elements from the set \(\{1,2,\ldots, s-1\}\setminus \{i\}\); hence \(\sum_{r=0}^{k-2}\sum_{\{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} 1=\sum_{r=0}^{k-2}\binom{s-2}{k-1-r}\binom{n-s+1}{r}\). Similarly, we have

$$\begin{aligned} &\sum_{r=0}^{k-2}\sum _{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \bigl(x_{i_{2}}^{k-1}+ \cdots+x_{i_{k}}^{k-1} \bigr) \\ &\quad=\sum_{r=0}^{k-2}\binom{s-3}{k-r-2} \binom{n-s+1}{r} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) \\ &\qquad+\sum_{r=0}^{k-2}\binom{s-2}{k-1-r} \binom{n-s}{r-1} \Biggl(\sum_{t=s}^{n}x_{t}^{k-1} \Biggr) \\ &\quad=\binom{n-2}{k-2} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) +\sum_{r=0}^{k-2}\binom{s-2}{k-1-r} \binom{n-s}{r-1}(n-s+1). \end{aligned}$$

Then

$$\begin{aligned} &N(k-1)!\sum_{r=0}^{k-2}\sum _{ \{i_{2},\ldots, i_{k}\}\in N_{r}^{s}} \biggl(\frac{x_{i_{2}}^{k-1}+\cdots +x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \\ &\quad=N_{1} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) +N(k-2)!\sum_{r=0}^{k-2} \binom{s-2}{k-1-r}\binom{n-s}{r-1}(n-s+1) \\ &\qquad-N(k-1)!\sum_{r=0}^{k-2}\binom{s-2}{k-1-r} \binom{n-s+1}{r} \\ &\quad=N_{1} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) -N(k-2)!\sum_{r=0}^{k-2}(k-1-r) \binom{s-2}{k-1-r}\binom{n-s+1}{r} \\ &\quad=N_{1} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) -N(k-2)!\sum_{r=0}^{k-2}(s-2) \binom{s-3}{k-r-2}\binom{n-s+1}{r} \\ &\quad=N_{1} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr) -N(k-2)!(s-2)\binom{n-2}{k-2} \\ &\quad=N_{1} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr)-(s-2)N_{1}. \end{aligned}$$

Thus, by (2.3), (2.4), and the definition of \(x_{i}^{k-1}\) for \(1\leq i\leq s-1\), we have

$$\begin{aligned} r_{i}(\mathbb{B}) &\leq M+\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}-M+N_{1} \Biggl(\sum_{t=1}^{s-1}x_{t}^{k-1}-x_{i}^{k-1} \Biggr)-(s-2)N_{1} \Biggr\} \\ &=M-N_{1}+\frac{1}{x_{i}^{k-1}} \Biggl\{ r_{i}-M+N_{1} \sum_{t=1}^{s-1}x_{t}^{k-1}-(s-2)N_{1} \Biggr\} \\ &=\phi_{s}. \end{aligned}$$

Subcase 2.2: \(s=2\).

In this case, we need to show \(r_{1}(\mathbb{B})\leq\phi_{2}\). Noting that \(x_{2}=\cdots=x_{n}=1\), by (2.4) and the definition of \(N_{r}^{2}\), we have

$$\begin{aligned} r_{1}(\mathbb{B}) \leq{}& M+\frac{1}{x_{1}^{k-1}} \Biggl\{ r_{1}-M+N(k-1)!\sum _{r=0}^{k-2}\sum_{\{i_{2},\ldots, i_{k}\}\in N_{r}^{2}} \biggl(\frac{x_{i_{2}}^{k-1}+\cdots +x_{i_{k}}^{k-1}}{k-1}-1 \biggr) \Biggr\} \\ ={}&M+\frac{1}{x_{1}^{k-1}}(r_{1}-M). \end{aligned}$$

By (2.3), we have \(x_{1}^{k-1}=\frac{\phi_{2}-r_{2}+N_{1}}{N_{1}}\). Then, by (2.1) and the definition of \(\phi_{2}\), we have

$$\begin{aligned} &\frac{1}{x_{1}^{k-1}}(r_{1}-M) \\ &\quad=\frac{N_{1}(r_{1}-M)}{\phi_{2}-r_{2}+N_{1}} \\ &\quad=\frac{2N_{1}(r_{1}-M)}{N_{1}+M-r_{2}+\sqrt{(N_{1}-M+r_{2})^{2}+4N_{1}(r_{1}-r_{2})}} \\ &\quad=\frac{2N_{1}(r_{1}-M)(N_{1}+M-r_{2}-\sqrt {(N_{1}-M+r_{2})^{2}+4N_{1}(r_{1}-r_{2})})}{(N_{1}+M-r_{2})^{2}-((N_{1}-M+r_{2})^{2}+4N_{1}(r_{1}-r_{2}))} \\ &\quad=-\frac{N_{1}+M-r_{2}-\sqrt{(N_{1}-M+r_{2})^{2}+4N_{1}(r_{1}-r_{2})}}{2}. \end{aligned}$$

Thus

$$\begin{aligned} r_{1}(\mathbb{B})\leq M+\frac{1}{x_{1}^{k-1}}(r_{1}-M)= \phi_{2}. \end{aligned}$$

Combining Subcases 2.1 and 2.2, we have \(r_{i}(\mathbb{B})\leq\phi_{s}\) for \(1\leq i\leq s-1\), and combining Cases 1 and 2, we have \(r_{i}(\mathbb{B})\leq\phi_{s}\) for \(1\leq i\leq n\). Then \(\rho(\mathbb{A})=\rho(\mathbb{B})\leq\max_{1\leq i\leq n}r_{i}(\mathbb{B})\leq\phi_{s}\) for \(2\leq s\leq n\) by (2.2) and Lemma 1.5.

Therefore, we know \(\rho(\mathbb{A})\leq\phi_{s}\) for \(1\leq s\leq n\) and thus \(\rho(\mathbb{A})\leq\min_{1\leq s\leq n}\phi_{s}\).

Now suppose that \(\mathbb{A}\) is weakly irreducible. Then \(\mathbb{B}\) is also weakly irreducible by \(\mathbb{B}=U^{-(k-1)}\mathbb{A}U\). Let \(\phi_{s}=\min_{1\leq l\leq n}\phi_{l}\).

Case 1: \(s=1\).

By Lemma 1.5 and the fact \(r_{1}=\max_{1\leq i\leq n}r_{i}\), we have \(\rho(\mathbb{A})= \phi_{1}\) if and only if \(r_{1}=r_{2}=\cdots=r_{n}\).

Case 2: \(2\leq s\leq n\).

Then \(\rho(\mathbb{B})= \max_{1\leq i\leq n}r_{i}(\mathbb{B})\), and thus \(r_{1}(\mathbb{B})=r_{2}(\mathbb{B})=\cdots=r_{n}(\mathbb{B})=\phi_{s}\) by \(\phi_{s}=\rho(\mathbb{A})=\rho(\mathbb{B})\leq\max_{1\leq i\leq n}r_{i}(\mathbb{B})\leq\phi_{s}\) and Lemma 1.5. Therefore, (a), (b), (c), and (d) hold for any \(i\in[n]\), and (e) holds for any \(i\in\{s,\ldots, n\}\).

Subcase 2.1: \(r_{1}=r_{s}\).

Since \(r_{1}\geq r_{2}\geq\cdots\geq r_{n}\) and, by (e), \(r_{i}=r_{s}\) for \(s\leq i\leq n\), we have \(r_{1}= r_{2}=\cdots=r_{n}\).

Subcase 2.2: \(r_{1}>r_{s}\).

Let t be the smallest integer such that \(r_{t}=r_{s}\) for \(1< t\leq s\). Since \(r_{s}=r_{s+1}=\cdots=r_{n}\), we have \(r_{t}=r_{t+1}=\cdots =r_{n}\) and \(x_{i}>1\) for \(i=1,2,\ldots,t-1\).

When \(k\geq3\), (c) and (d) cannot hold at the same time: for \(\{i_{2},\ldots, i_{k}\}\in N_{r}^{s}\) with \(1\leq r\leq k-2\), r elements of \(\{i_{2},\ldots, i_{k}\}\) are chosen from \(\{s,\ldots, n\}\) and \(k-1-r\) elements from \(\{1,\ldots, s-1\}\), so \(x_{i_{2}}=\cdots=x_{i_{k}}\) cannot hold. Thus we only need to consider the case \(k=2\).

In the case of \(k=2\), (d) implies

$$\sum_{\{i,i_{2}\}\in S(\mathbb{A})} (x_{i_{2}}-1 )=\sum _{r=0}^{1}\sum_{ \{i_{2}\}\in N_{r}^{s}} (x_{i_{2}}-1 )=\sum_{\substack{{i_{2}=1}\\ i_{2}\neq i}}^{t-1} (x_{i_{2}}-1 ). $$

Then (i)–(iii) follow from (a), (b), (c), (d) for \(1\leq i\leq n\), and (e) for \(s\leq i\leq n\), and thus (1) and (2) hold.

Conversely, if \(r_{1}=r_{2}=\cdots=r_{n}\), then by Lemma 1.5, \(\rho(\mathbb{A})=\phi_{1}=r_{1}\). If \(k=2\) and (i)–(iii) hold, then (a), (b), (c), and (d) hold for \(1\leq i\leq n\), (e) holds for \(s\leq i\leq n\). Then we have \(r_{i}(\mathbb{B})=\phi_{s}\) for \(1\leq i\leq n\). Therefore, by Lemma 1.5, we have \(\rho(\mathbb{A})=\rho(\mathbb {B})=\max_{1\leq i\leq n}r_{i}(\mathbb{B})=\phi_{s}\) for \(s=2,\ldots,n\). □

Let \(k=2\). Then \(\mathbb{A}\) is a matrix, weak irreducibility for tensors corresponds to irreducibility for matrices, and slice sum for tensors corresponds to row sum for matrices. The following result follows immediately.

Corollary 2.2

([7], Theorem 2.1)

Let A be an \(n\times n\) nonnegative matrix with row sums \(r_{1},r_{2},\ldots,r_{n}\), where \(r_{1}\geq r_{2}\geq\cdots\geq r_{n}\). Let M be the largest diagonal element and N be the largest non-diagonal element of A. Suppose that \(N>0\). Let \(\phi_{1}=r_{1}\) and, for \(2\leq s\leq n\),

$$ \phi_{s}=\frac{1}{2} \Biggl(r_{s}+M-N+ \sqrt{(r_{s}-M+N)^{2}+4N \sum _{t=1}^{s-1}(r_{t}-r_{s})} \Biggr). $$
(2.7)

Then \(\rho(A)\leq\min_{1\leq s\leq n}\phi_{s}\).

Let \(\phi_{s}=\min_{1\leq l\leq n}\phi_{l}\). If A is irreducible, then \(\rho(A)=\phi_{s}\) if and only if \(r_{1}=r_{2}=\cdots=r_{n}\) or, for some t (\(2\leq t\leq s\)), A satisfies the following conditions:

  1. (i)

    \(a_{ii}=M\) for \(1\leq i\leq t-1\);

  2. (ii)

    \(a_{ii_{2}}=N\) for \(1\leq i\leq s-1\) and \(1\leq i_{2}\leq t-1\) with \(i\neq i_{2}\);

  3. (iii)

    \(r_{t}=\cdots=r_{n}\);

  4. (iv)

    \(a_{ii_{2}}=N\) for \(s\leq i\leq n\) and \(1\leq i_{2}\leq t-1\).
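For a quick numerical illustration of Corollary 2.2, the sketch below computes \(\min_{s}\phi_{s}\) from (2.7) and compares it with a crude power-iteration estimate of \(\rho(A)\). All names are illustrative; sorting the row sums is harmless because permutation similarity changes neither \(\rho(A)\) nor M, N, nor the multiset of row sums.

```python
from math import sqrt

def duan_zhou_bound(A):
    """min_s phi_s from (2.7) for a nonnegative matrix A (nested lists)."""
    n = len(A)
    r = sorted((sum(row) for row in A), reverse=True)
    M = max(A[i][i] for i in range(n))
    N = max(A[i][j] for i in range(n) for j in range(n) if i != j)
    assert N > 0, "the corollary assumes N > 0"
    phis = [r[0]]
    for s in range(2, n + 1):
        rs = r[s - 1]
        extra = sum(r[t] - rs for t in range(s - 1))
        phis.append(0.5 * (rs + M - N + sqrt((rs - M + N) ** 2 + 4 * N * extra)))
    return min(phis)

def power_rho(A, iters=300):
    """Crude power iteration for the Perron root of a nonnegative
    irreducible matrix with a dominant eigenvalue."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)
        x = [yi / lam for yi in y]
    return lam
```

For the matrix with rows \((0,2,1)\), \((1,0,1)\), \((1,1,0)\), the bound is \(\sqrt{6}\approx2.449\) while \(\rho(A)=(1+\sqrt{13})/2\approx2.303\), so the bound is strict here (the equality conditions (i)–(iv) fail).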

Applications to a k-uniform hypergraph

A hypergraph is a natural generalization of an ordinary graph [1].

A hypergraph \(\mathcal{H}=(V(\mathcal{H}), E(\mathcal{H}))\) on n vertices consists of a vertex set \(V(\mathcal{H})=\{1,2,\ldots,n\}\) and an edge set \(E(\mathcal{H})=\{e_{1},e_{2},\ldots,e_{m}\}\), where \(e_{i}=\{i_{1},i_{2},\ldots,i_{l}\}\) with \(i_{j}\in[n]\), \(j=1,2,\ldots,l\). Let \(k\geq2\). If \(|e_{i}|=k\) for every \(i=1,2,\ldots,m\), then \(\mathcal{H}\) is called a k-uniform hypergraph; when \(k=2\), \(\mathcal{H}\) is an ordinary graph. The degree \(d_{i}\) of vertex i is defined as \(d_{i}=|\{e_{j}:i\in e_{j}\in E(\mathcal{H})\}|\). If \(d_{i}=d\) for every vertex i of a hypergraph \(\mathcal{H}\), then \(\mathcal{H}\) is called d-regular. A walk W of length \(\ell\) in \(\mathcal{H}\) is a sequence of alternating vertices and edges \(v_{0},e_{1},v_{1},e_{2},\ldots,e_{\ell},v_{\ell}\), where \(\{v_{i},v_{i+1}\}\subseteq e_{i+1}\) for \(i=0,1,\ldots,\ell-1\). The hypergraph \(\mathcal{H}\) is said to be connected if every two vertices are connected by a walk.

Definition 3.1

([6, 18])

Let \(\mathcal{H}=(V(\mathcal{H}),E(\mathcal{H}))\) be a k-uniform hypergraph on n vertices. The adjacency tensor of \(\mathcal{H}\) is defined as the order k dimension n tensor \(\mathbb{A}(\mathcal{H})\), whose \((i_{1}i_{2}\cdots i_{k})\)-entry is

$$\bigl(\mathbb{A}(\mathcal{H}) \bigr)_{i_{1}i_{2}\cdots i_{k}}=\textstyle\begin{cases} \frac{1}{(k-1)!}, & \mbox{if } \{i_{1},i_{2},\ldots,i_{k}\}\in E(\mathcal{H}), \\ 0, & \mbox{otherwise} . \end{cases} $$

Let \(\mathbb{D}(\mathcal{H})\) be an order k dimension n diagonal tensor with its diagonal entry \(\mathbb{D}_{ii\cdots i}\) being \(d_{i}\), the degree of vertex i for all \(i\in V(\mathcal{H})=[n]\). Then \(\mathbb{Q}(\mathcal{H})=\mathbb{D(\mathcal{H})}+ \mathbb {A(\mathcal{H})} \) is called the signless Laplacian tensor of the hypergraph \(\mathcal{H}\). Clearly, the adjacency tensor and the signless Laplacian tensor of a k-uniform hypergraph \(\mathcal{H}\) are nonnegative symmetric k-uniform tensors and, for any \(1\leq i\leq n\),

$$r_{i} \bigl(\mathbb{A}(\mathcal{H}) \bigr)=\sum_{i_{2},\ldots, i_{k}=1}^{n} \bigl(\mathbb{A}(\mathcal{H}) \bigr)_{ii_{2}\cdots i_{k}} =d_{i} \quad\mbox{and}\quad r_{i} \bigl(\mathbb{Q}(\mathcal{H}) \bigr)=\sum_{i_{2},\ldots, i_{k}=1}^{n} \bigl(\mathbb{Q}(\mathcal{H}) \bigr)_{ii_{2}\cdots i_{k}} =2d_{i}. $$

It was proved in [9, 20] that a k-uniform hypergraph \(\mathcal{H}\) is connected if and only if its adjacency tensor \(\mathbb{A}(\mathcal{H})\) (and thus the signless Laplacian tensor \(\mathbb{Q}(\mathcal{H})\)) is weakly irreducible.
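As a small sanity check of Definition 3.1 and of the identity \(r_{i}(\mathbb{A}(\mathcal{H}))=d_{i}\), the following sketch builds the adjacency tensor as a sparse dictionary of index tuples; the representation and function names are illustrative.

```python
from itertools import permutations
from math import factorial

def adjacency_tensor(edges, k):
    """Sparse adjacency tensor of a k-uniform hypergraph (Definition 3.1):
    every permutation of an edge receives the entry 1/(k-1)!."""
    w = 1.0 / factorial(k - 1)
    return {p: w for e in edges for p in permutations(e)}

def slice_sum(entries, i):
    """The i-th slice sum r_i; for an adjacency tensor this is the degree d_i."""
    return sum(v for idx, v in entries.items() if idx[0] == i)
```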

Recently, several papers studied the spectral radii of the adjacency tensor \(\mathbb{A}(\mathcal{H})\) and the signless Laplacian tensor \(\mathbb{Q}(\mathcal{H})\) of a k-uniform hypergraph \(\mathcal{H}\) (see [4, 6, 18, 19, 27, 28] and so on). In this section, we apply Theorem 2.1 to the adjacency tensor \(\mathbb{A}(\mathcal{H})\) and the signless Laplacian tensor \(\mathbb{Q}(\mathcal{H})\) of a k-uniform hypergraph \(\mathcal{H}\). If \(k=2\), we obtain Theorem 3.1 and Theorem 4.2 in [7]. If \(k\geq3\), we improve some known results about the bounds of \(\rho (\mathbb{A}(\mathcal{H}))\) and \(\rho(\mathbb{Q}(\mathcal{H}))\) in [4].

Theorem 3.2

Let \(k\geq3\), let \(\mathcal{H}\) be a k-uniform hypergraph with degree sequence \(d_{1}\geq\cdots\geq d_{n}\), and let \(\mathbb{A}(\mathcal{H})\) be the adjacency tensor of \(\mathcal{H}\). Let \(A_{1}=\frac{1}{k-1}\binom{n-2}{k-2}\), \(\phi_{1}=d_{1}\), and

$$ \phi_{s}=\frac{1}{2} \Biggl\{ d_{s}-A_{1}+ \sqrt {(d_{s}+A_{1})^{2}+4A_{1} \sum_{t=1}^{s-1}(d_{t}-d_{s})} \Biggr\} $$
(3.1)

for \(2\leq s\leq n\). Then

$$\begin{aligned} \rho \bigl(\mathbb{A}(\mathcal{H}) \bigr)&\leq\min _{1\leq s\leq n}\phi_{s}. \end{aligned}$$
(3.2)

If \(\mathcal{H}\) is connected, then the equality in (3.2) holds if and only if \(\mathcal{H}\) is regular.

Proof

Let \(\mathbb{A}=\mathbb{A}(\mathcal{H})\). Applying Theorem 2.1 to \(\mathbb{A}(\mathcal{H})\), we have \(M=0\), \(N=\frac{1}{(k-1)!}\), \(r_{i}=d_{i}\) for \(1\leq i\leq n\), and \(N_{1}=A_{1}\), so (3.1) follows from (2.1). Thus (3.2) holds by Theorem 2.1.

If \(\mathcal{H}\) is connected, then by Theorem 2.1 the equality in (3.2) holds if and only if \(r_{1}(\mathbb{A(\mathcal{H})})=r_{2}(\mathbb{A(\mathcal{H})})=\cdots =r_{n}(\mathbb{A(\mathcal{H})})\), which says exactly that \(\mathcal{H}\) is regular, since \(r_{i}(\mathbb{A(\mathcal{H})})=d_{i}\) for any \(1\leq i\leq n\). □
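Theorem 3.2 yields a bound computable from the degree sequence alone; the sketch below (illustrative names) evaluates \(\min_{s}\phi_{s}\) from (3.1).

```python
from math import comb, sqrt

def adjacency_bound(degrees, k):
    """min_s phi_s from (3.1) for a k-uniform hypergraph with the given
    degree sequence (Theorem 3.2); the sequence is sorted internally."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    A1 = comb(n - 2, k - 2) / (k - 1)
    phis = [float(d[0])]
    for s in range(2, n + 1):
        ds = d[s - 1]
        extra = sum(d[t] - ds for t in range(s - 1))
        phis.append(0.5 * (ds - A1 + sqrt((ds + A1) ** 2 + 4 * A1 * extra)))
    return min(phis)
```

For the regular sequence \((2,2,2,2)\) with \(k=3\) the bound equals 2 and is attained, as the theorem predicts; for \((3,2,2,2)\) it gives \((1+\sqrt{13})/2\approx2.303\), strictly below \(\phi_{1}=d_{1}=3\).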

Theorem 3.3

Let \(k\geq3\), let \(\mathcal{H}\) be a k-uniform hypergraph with degree sequence \(d_{1}\geq\cdots\geq d_{n}\), and let \(\mathbb{Q}(\mathcal{H})\) be the signless Laplacian tensor of \(\mathcal{H}\). Let \(A_{1}=\frac{1}{k-1}\binom{n-2}{k-2}\), \(\psi_{1}=2d_{1}\), and

$$ \psi_{s}=\frac{1}{2} \Biggl\{ 2d_{s}+d_{1}-A_{1}+ \sqrt {(2d_{s}-d_{1}+A_{1})^{2}+8A_{1} \sum_{t=1}^{s-1}(d_{t}-d_{s})} \Biggr\} $$
(3.3)

for \(2\leq s\leq n\). Then

$$ \rho \bigl(\mathbb{Q}(\mathcal{H}) \bigr)\leq\min _{1\leq s\leq n}\psi_{s}. $$
(3.4)

If \(\mathcal{H}\) is connected, then the equality in (3.4) holds if and only if \(\mathcal{H}\) is regular.

Proof

Let \(\mathbb{A}=\mathbb{Q}(\mathcal{H})\). Applying Theorem 2.1 to \(\mathbb{Q}(\mathcal{H})\), we have \(M=d_{1}\), \(N=\frac{1}{(k-1)!}\), \(r_{i}=2d_{i}\) for \(1\leq i\leq n\), and \(N_{1}=A_{1}\), so (3.3) follows from (2.1). Thus (3.4) holds by Theorem 2.1.

If \(\mathcal{H}\) is connected, then by Theorem 2.1 the equality in (3.4) holds if and only if \(r_{1}(\mathbb{Q(\mathcal{H})})=r_{2}(\mathbb{Q(\mathcal{H})})=\cdots =r_{n}(\mathbb{Q(\mathcal{H})})\), which says exactly that \(\mathcal{H}\) is regular, since \(r_{i}(\mathbb{Q(\mathcal{H})})=2d_{i}\) for any \(1\leq i\leq n\). □

Applications to a k-uniform directed hypergraph

Directed hypergraphs have found applications in image processing [8], optical network communications [15], computer science, and combinatorial optimization [10]. However, unlike the spectral theory of undirected hypergraphs, there are very few results in the spectral theory of directed hypergraphs.

A directed hypergraph \(\overrightarrow{\mathcal{H}}\) is a pair \((V(\overrightarrow{\mathcal{H}}), E(\overrightarrow{\mathcal{H}}))\), where \(V(\overrightarrow{\mathcal{H}})=[n]\) is the set of vertices and \(E(\overrightarrow{\mathcal{H}})=\{e_{1}, e_{2}, \ldots, e_{m}\}\) is the set of arcs. An arc \(e\in E(\overrightarrow{\mathcal{H}})\) is a pair \(e=(j_{1},e(j_{1}))\), where \(e(j_{1})=\{j_{2},\ldots,j_{t}\}\), \(j_{l}\in V(\overrightarrow{\mathcal{H}})\), and \(j_{l}\neq j_{h}\) if \(l\neq h\) for \(l, h\in[t]\) and \(t\in[n]\). The vertex \(j_{1}\) is called the tail (or out-vertex) and every other vertex \(j_{2},\ldots,j_{t}\) is called a head (or in-vertex) of the arc e. The out-degree of a vertex \(j\in V(\overrightarrow{\mathcal{H}})\) is defined as \(d_{j}^{+}=|E_{j}^{+}|\), where \(E_{j}^{+}=\{e\in E(\overrightarrow{\mathcal{H}}): j \mbox{ is the tail of } e\}\). If the out-degree \(d_{j}^{+}\) has the same value d for every \(j\in V(\overrightarrow{\mathcal{H}})\), then \(\overrightarrow{\mathcal{H}}\) is called a d-out-regular directed hypergraph.

For a vertex \(i\in V(\overrightarrow{\mathcal{H}})\), we denote by \(E_{i}\) the set of arcs containing the vertex i, i.e., \(E_{i}=\{e\in E(\overrightarrow{\mathcal{H}}): i\in e\}\). Two distinct vertices i and j are weak-connected if there is a sequence of arcs \((e_{1},\ldots,e_{t})\) such that \(i\in e_{1}\), \(j\in e_{t}\), and \(e_{r}\cap e_{r+1}\neq\emptyset\) for all \(r\in[t-1]\). Two distinct vertices i and j are strong-connected, denoted by \(i\rightarrow j\), if there is a sequence of arcs \((e_{1},\ldots,e_{t})\) such that i is the tail of \(e_{1}\), j is a head of \(e_{t}\), and a head of \(e_{r}\) is the tail of \(e_{r+1}\) for all \(r\in[t-1]\). A directed hypergraph is called weakly connected if every pair of different vertices of \(\overrightarrow{\mathcal{H}}\) is weak-connected. A directed hypergraph is called strongly connected if every pair of different vertices i and j of \(\overrightarrow{\mathcal{H}}\) satisfies \(i\rightarrow j\) and \(j\rightarrow i\).
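Strong connectivity as defined above is plain reachability in the digraph that joins the tail of each arc to its heads, so it can be tested by a depth-first search from every vertex; the following sketch (illustrative names and arc encoding) does exactly that.

```python
def is_strongly_connected(n, arcs):
    """Strong connectivity of a directed hypergraph on vertices 0..n-1,
    with each arc encoded as a (tail, heads) pair. By the definition above,
    i -> j is reachability in the digraph joining each tail to its heads."""
    out = [[] for _ in range(n)]
    for tail, heads in arcs:
        out[tail].extend(heads)
    for s in range(n):
        seen = {s}
        stack = [s]
        while stack:
            u = stack.pop()
            for w in out[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) < n:
            return False
    return True
```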

Similar to the definition of a k-uniform hypergraph, we define a k-uniform directed hypergraph as follows: A directed hypergraph \(\overrightarrow{\mathcal{H}}=(V(\overrightarrow {\mathcal{H}}), E(\overrightarrow{\mathcal{H}}))\) is called a k-uniform directed hypergraph if \(|e|=k\) for any arc \(e\in E(\overrightarrow{\mathcal{H}})\). When \(k=2\), then \(\overrightarrow{\mathcal{H}}\) is an ordinary digraph.

The following definition for the adjacency tensor and signless Laplacian tensor of a directed hypergraph was proposed by Chen and Qi in [5].

Definition 4.1

([5])

Let \(\overrightarrow{\mathcal{H}}=(V(\overrightarrow{\mathcal{H}}), E(\overrightarrow{\mathcal{H}}))\) be a k-uniform directed hypergraph. The adjacency tensor of the directed hypergraph \(\overrightarrow {\mathcal{H}}\) is defined as the order k dimension n tensor \(\mathbb {A}(\overrightarrow{\mathcal{H}})\), whose \((i_{1}i_{2}\cdots i_{k})\)-entry is

$$\bigl(\mathbb{A}(\overrightarrow{\mathcal{H}}) \bigr)_{i_{1}\cdots i_{k}}=\textstyle\begin{cases} \frac{1}{(k-1)!}, & \mbox{if } (i_{1},e(i_{1}))\in E(\overrightarrow{\mathcal{H}}) \mbox{ and } e(i_{1})=\{i_{2},\ldots ,i_{k}\},\\ 0, & \mbox{otherwise} . \end{cases} $$

Let \(\mathbb{D}(\overrightarrow{\mathcal{H}})\) be an order k dimension n diagonal tensor with its diagonal entry \(d_{ii\cdots i}\) being \(d_{i}^{+}\), the out-degree of vertex i, for all \(i\in V(\overrightarrow{\mathcal{H}})=[n]\). Then \(\mathbb{Q}(\overrightarrow{\mathcal{H}})=\mathbb{D(\overrightarrow {\mathcal{H}})}+ \mathbb{A(\overrightarrow{\mathcal{H}})} \) is the signless Laplacian tensor of the directed hypergraph \(\overrightarrow {\mathcal{H}}\).

Clearly, the adjacency tensor and the signless Laplacian tensor of a k-uniform directed hypergraph \(\overrightarrow{\mathcal{H}}\) are nonnegative k-uniform tensors, but not symmetric in general. For any \(1\leq i\leq n\), we have

$$r_{i} \bigl(\mathbb{A(\overrightarrow{\mathcal{H}})} \bigr)=\sum _{i_{2},\ldots, i_{k}=1}^{n} \bigl(\mathbb{A(\overrightarrow{ \mathcal{H}})} \bigr)_{ii_{2}\cdots i_{k}} =d_{i}^{+} $$

and

$$r_{i} \bigl(\mathbb{Q(\overrightarrow{\mathcal{H}})} \bigr)=\sum _{i_{2},\ldots, i_{k}=1}^{n} \bigl(\mathbb{Q(\overrightarrow{ \mathcal{H}})} \bigr)_{ii_{2}\cdots i_{k}} =2d_{i}^{+}. $$
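As a numerical check (not part of the original paper), the following Python sketch builds the adjacency tensor of a hypothetical 3-uniform directed hypergraph as a sparse index-to-value map and verifies the two row-sum identities above; the vertex set and arcs are chosen only for illustration.

```python
import math
from fractions import Fraction
from itertools import permutations

k, n = 3, 4
# Hypothetical 3-uniform directed hypergraph: each arc is (tail, set of heads).
arcs = [(0, {1, 2}), (1, {2, 3}), (2, {0, 3}), (3, {0, 1})]
out_degree = [sum(1 for tail, _ in arcs if tail == i) for i in range(n)]

# Adjacency tensor: entry 1/(k-1)! for every ordering (i_2, ..., i_k) of the heads.
w = Fraction(1, math.factorial(k - 1))
A = {(tail,) + p: w for tail, heads in arcs for p in permutations(heads)}

for i in range(n):
    r_i = sum(v for idx, v in A.items() if idx[0] == i)
    assert r_i == out_degree[i]                      # r_i(A) = d_i^+
    assert out_degree[i] + r_i == 2 * out_degree[i]  # r_i(Q) = r_i(D) + r_i(A) = 2 d_i^+
```

Each arc contributes \((k-1)!\) nonzero entries of value \(1/(k-1)!\), which is why the i-th row sum collapses to the out-degree \(d_{i}^{+}\).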

The following definition gives an alternative characterization of weak irreducibility.

Definition 4.2

([9, 12])

Suppose that \(\mathbb{A}=(a_{i_{1}i_{2}\ldots i_{k}})_{1\le i_{j}\le n (j=1, \ldots, k)}\) is a nonnegative tensor of order k and dimension n. The representation \(G(\mathbb{A})\) associated with the nonnegative tensor \(\mathbb{A}\) is the nonnegative matrix whose \((i, j)\)th entry is the sum of those \(a_{ii_{2}\ldots i_{k}}\) whose indices satisfy \(\{i_{2},\ldots, i_{k}\}\ni j\). We call the tensor \(\mathbb{A}\) weakly reducible if its representation \(G(\mathbb{A})\) is a reducible matrix, and weakly irreducible otherwise.
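The representation \(G(\mathbb{A})\) is easy to compute for a sparse tensor. The sketch below (the order-3 tensor stored as an index-to-value map is a hypothetical example, not an object from the paper) sums each entry \(a_{ii_{2}\ldots i_{k}}\) into row i, column j, once for every distinct \(j\in\{i_{2},\ldots,i_{k}\}\).

```python
def representation_matrix(A, n):
    """G(A): the (i, j) entry sums a_{i i2...ik} over tuples with j in {i2,...,ik}."""
    G = [[0.0] * n for _ in range(n)]
    for idx, val in A.items():
        i, rest = idx[0], idx[1:]
        for j in set(rest):  # each distinct index j counted once per entry
            G[i][j] += val
    return G

# Hypothetical order-3, dimension-3 nonnegative tensor with six nonzero entries.
A = {(0, 1, 2): 0.5, (0, 2, 1): 0.5, (1, 0, 2): 0.5,
     (1, 2, 0): 0.5, (2, 0, 1): 0.5, (2, 1, 0): 0.5}
G = representation_matrix(A, 3)
```

By Definition 4.2, \(\mathbb{A}\) is weakly irreducible exactly when the resulting matrix G is irreducible.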

Let \(A=(a_{ij})\) be a nonnegative square matrix of order n. The associated digraph \(D(A)=(V,E)\) of A (possibly with loops) is defined to be the digraph with vertex set \(V=\{1,2,\ldots,n\}\) and arc set \(E=\{(i,j)\mid a_{ij}>0\}\).

Now we give a characterization of a strongly connected k-uniform directed hypergraph.

Theorem 4.3

Let\(\overrightarrow{\mathcal{H}}\)be ak-uniform directed hypergraph, \(\mathbb{A}=\mathbb{A}(\overrightarrow{\mathcal {H}})=(a_{i_{1}i_{2}\cdots i_{k}})\)be the adjacency tensor of\(\overrightarrow{\mathcal{H}}\), \(G(\mathbb{A})\)be the representation associated matrix of\(\mathbb{A}\), and\(D(G(\mathbb{A}))\)be the associated directed graph of\(G(\mathbb{A})\). Then the following four conditions are equivalent:

  1. (i)

    \(\mathbb{A}\)is weakly irreducible.

  2. (ii)

    \(G(\mathbb{A})\)is irreducible.

  3. (iii)

    \(D(G(\mathbb{A}))\)is strongly connected.

  4. (iv)

    \(\overrightarrow{\mathcal{H}}\)is strongly connected.

Proof

By Proposition 15 in [27] and the fact that \(\mathbb{A}=\mathbb{A}(\overrightarrow{\mathcal{H}})\) is a nonnegative tensor, we have (i) \(\Leftrightarrow\) (ii) \(\Leftrightarrow\) (iii). It remains to show (iii) \(\Leftrightarrow\) (iv).

(iii) \(\Rightarrow\) (iv): Suppose \(D(G(\mathbb{A}))\) is strongly connected. We show that \(\overrightarrow{\mathcal{H}}\) is strongly connected.

For any \(i,j\in V(\overrightarrow{\mathcal{H}})=V(D(G(\mathbb{A})))\), since \(D(G(\mathbb{A}))\) is strongly connected, there exists a directed path P from i to j in \(D(G(\mathbb{A}))\). Assume \(P=ij_{1}j_{2}\cdots j_{t}j\). Then \((i,j_{1}),(j_{1},j_{2}),\ldots , (j_{t},j)\in E(D(G(\mathbb{A})))\), which implies \(\sum_{j_{1}\in\{i_{2},\ldots, i_{k}\}} a_{ii_{2}\cdots i_{k}}>0\), \(\sum_{j_{2}\in\{i_{2},\ldots, i_{k}\}} a_{j_{1}i_{2}\cdots i_{k}}>0\), …, \(\sum_{j_{t}\in\{i_{2},\ldots, i_{k}\}} a_{j_{t-1}i_{2}\cdots i_{k}}>0\), and \(\sum_{j\in\{i_{2},\ldots, i_{k}\}} a_{j_{t}i_{2}\cdots i_{k}}>0\). Thus there exists a sequence of arcs (\(e_{1},e_{2},\ldots,e_{t},e_{t+1}\)), where \(e_{l}\in E(\overrightarrow{\mathcal{H}})\) for \(l\in[t+1]\), such that i is the tail of \(e_{1}\) and \(j_{1}\) is a head of \(e_{1}\); \(j_{l}\) is the tail of \(e_{l+1}\) and \(j_{l+1}\) is a head of \(e_{l+1}\) for \(1\leq l\leq t-1\); and \(j_{t}\) is the tail of \(e_{t+1}\) and j is a head of \(e_{t+1}\). That is, \(i\rightarrow j\) in \(\overrightarrow{\mathcal{H}}\). Therefore \(\overrightarrow{\mathcal{H}}\) is strongly connected.

(iv) \(\Rightarrow\) (iii): Suppose \(\overrightarrow{\mathcal{H}}\) is strongly connected. We show that \(D(G(\mathbb{A}))\) is strongly connected.

For any \(i,j\in V(D(G(\mathbb{A})))=V(\overrightarrow{\mathcal{H}})\), we have \(i\rightarrow j\) in \(\overrightarrow{\mathcal{H}}\) since \(\overrightarrow{\mathcal{H}}\) is strongly connected; that is, there exists a sequence of arcs (\(e_{1},e_{2},\ldots,e_{t},e_{t+1}\)), where \(e_{l}\in E(\overrightarrow{\mathcal{H}})\) for \(l\in[t+1]\), such that i is the tail of \(e_{1}\), j is a head of \(e_{t+1}\), and a head of \(e_{r}\) is the tail of \(e_{r+1}\) for all \(r\in[t]\). Assume that \(j_{r}\) is the tail of \(e_{r+1}\) and a head of \(e_{r}\) for all \(r\in[t]\). Then \(\sum_{j_{1}\in\{i_{2},\ldots, i_{k}\}} a_{ii_{2}\cdots i_{k}}>0\), \(\sum_{j_{r+1}\in\{i_{2}, \ldots, i_{k}\}} a_{j_{r}i_{2}\cdots i_{k}}>0\) for \(1\leq r\leq t-1\), and \(\sum_{j\in\{i_{2},\ldots, i_{k}\}} a_{j_{t}i_{2}\cdots i_{k}}>0\). Thus \((i,j_{1})\in E(D(G(\mathbb{A})))\), \((j_{r},j_{r+1})\in E(D(G(\mathbb {A})))\) for \(1\leq r\leq t-1\), and \((j_{t},j)\in E(D(G(\mathbb{A})))\), so \(ij_{1}j_{2}\cdots j_{t}j\) is a walk from i to j in \(D(G(\mathbb{A}))\). Therefore \(D(G(\mathbb{A}))\) is strongly connected. □
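The equivalence (iii) \(\Leftrightarrow\) (iv) can also be verified computationally on small examples. The Python sketch below (the 3-uniform directed hypergraph is a hypothetical example) builds \(G(\mathbb{A})\), forms the associated digraph \(D(G(\mathbb{A}))\), and tests strong connectivity by breadth-first search from every vertex.

```python
from collections import deque
from itertools import permutations

k, n = 3, 4
arcs = [(0, {1, 2}), (1, {2, 3}), (2, {0, 3}), (3, {0, 1})]  # hypothetical example

# Adjacency tensor entries: 1/(k-1)! per ordering of the heads.
A = {(tail,) + p: 0.5 for tail, heads in arcs for p in permutations(heads)}

# Representation matrix G(A) and its associated digraph D(G(A)).
G = [[0.0] * n for _ in range(n)]
for idx, val in A.items():
    for j in set(idx[1:]):
        G[idx[0]][j] += val
digraph = {i: {j for j in range(n) if G[i][j] > 0} for i in range(n)}

def reachable(start, adj):
    """Vertices reachable from `start` via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return seen

# D(G(A)) is strongly connected iff every vertex reaches every vertex.
strongly_connected = all(reachable(i, digraph) == set(range(n)) for i in range(n))
```

By Theorem 4.3, `strongly_connected` is true exactly when the directed hypergraph itself is strongly connected, i.e., when \(\mathbb{A}(\overrightarrow{\mathcal{H}})\) is weakly irreducible.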

Recently, several papers studied the spectral radii of the adjacency tensor \(\mathbb{A}(\overrightarrow{\mathcal{H}})\) and the signless Laplacian tensor \(\mathbb{Q}(\overrightarrow{\mathcal {H}})\) of a k-uniform directed hypergraph \(\overrightarrow{\mathcal{H}}\) (see [5, 24] and so on).

Let \(\overrightarrow{\mathcal{H}}\) be a k-uniform directed hypergraph. If \(\overrightarrow{\mathcal{H}}\) is strongly connected, then by Theorem 4.3 and the above definitions, \(\mathbb{A}(\overrightarrow{\mathcal{H}})\) and thus \(\mathbb {Q}(\overrightarrow{\mathcal{H}})\) are weakly irreducible. Thus we can apply Theorem 2.1 to the adjacency tensor \(\mathbb {A}(\overrightarrow{\mathcal{H}})\) and the signless Laplacian tensor \(\mathbb{Q}(\overrightarrow{\mathcal{H}})\) of a (strongly connected) k-uniform directed hypergraph \(\overrightarrow{\mathcal{H}}\). If \(k= 2\), we obtain Theorem 2.7 in [11]. If \(k\geq3\), we obtain some new results about the bounds of \(\rho (\mathbb{A}(\overrightarrow{\mathcal{H}}))\) and \(\rho(\mathbb {Q}(\overrightarrow{\mathcal{H}}))\) as follows.

Theorem 4.4

Let\(k\geq3\), let\(\overrightarrow{\mathcal{H}}\)be ak-uniform directed hypergraph with out-degree sequence\(d_{1}^{+}\geq\cdots\geq d_{n}^{+}\), and let\(\mathbb{A}(\overrightarrow{\mathcal{H}})\)be the adjacency tensor of\(\overrightarrow{\mathcal{H}}\). Let\(A_{1}=\frac{1}{k-1}\binom {n-2}{k-2}\), \(\phi_{1}=d_{1}^{+}\), and

$$ \phi_{s}=\frac{1}{2} \Biggl\{ d_{s}^{+}-A_{1}+ \sqrt { \bigl(d_{s}^{+}+A_{1} \bigr)^{2}+4A_{1} \sum_{t=1}^{s-1} \bigl(d_{t}^{+}-d_{s}^{+} \bigr)} \Biggr\} $$
(4.1)

for\(2\leq s\leq n\). Then

$$\begin{aligned} \rho \bigl(\mathbb{A}(\overrightarrow{\mathcal{H}}) \bigr)&\leq \min_{1\leq s\leq n}\phi_{s}. \end{aligned}$$
(4.2)

Moreover, if\(\overrightarrow{\mathcal{H}}\)is a strongly connectedk-uniform directed hypergraph, then the equality in (4.2) holds if and only if\(d_{1}^{+}=d_{2}^{+}=\cdots=d_{n}^{+}\).

Proof

Let \(\mathbb{A}=\mathbb{A}(\overrightarrow{\mathcal{H}})\). Applying Theorem 2.1 to \(\mathbb{A}(\overrightarrow{\mathcal{H}})\), we have \(M=0\), \(N=\frac{1}{(k-1)!}\), \(r_{i}=d^{+}_{i}\) for \(1\leq i\leq n\), and \(A_{1}=N_{1}\), so (4.1) follows from (2.1). Thus (4.2) holds by Theorem 2.1, and by Theorems 2.1 and 4.3 the equality in (4.2) holds if and only if \(d_{1}^{+}=d_{2}^{+}=\cdots=d_{n}^{+}\). □
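As an illustration (not from the paper), the bound (4.2) can be evaluated directly from an out-degree sequence; the degree sequences below are hypothetical.

```python
import math

def adjacency_bound(out_degrees, k):
    """min_{1 <= s <= n} phi_s from (4.1)-(4.2), with A_1 = C(n-2, k-2) / (k-1)."""
    d = sorted(out_degrees, reverse=True)   # d_1^+ >= ... >= d_n^+
    n = len(d)
    A1 = math.comb(n - 2, k - 2) / (k - 1)
    phis = [float(d[0])]                    # phi_1 = d_1^+
    for s in range(2, n + 1):
        ds = d[s - 1]
        gap = sum(d[t] - ds for t in range(s - 1))  # sum_{t=1}^{s-1} (d_t^+ - d_s^+)
        phis.append(0.5 * (ds - A1 + math.sqrt((ds + A1) ** 2 + 4 * A1 * gap)))
    return min(phis)

# For a d-out-regular sequence every gap vanishes and phi_s = d for all s,
# matching the equality case of Theorem 4.4.
regular = adjacency_bound([2, 2, 2, 2], 3)  # returns 2.0
```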

Theorem 4.5

Let\(k\geq3\), let\(\overrightarrow{\mathcal{H}}\)be ak-uniform directed hypergraph with out-degree sequence\(d_{1}^{+}\geq\cdots\geq d_{n}^{+}\), and let\(\mathbb{Q}(\overrightarrow{\mathcal{H}})\)be the signless Laplacian tensor of\(\overrightarrow{\mathcal{H}}\). Let\(A_{1}=\frac {1}{k-1}\binom{n-2}{k-2}\), \(\psi_{1}=2d_{1}^{+}\), and

$$ \psi_{s}=\frac{1}{2} \Biggl\{ 2d_{s}^{+}+d_{1}^{+}-A_{1}+ \sqrt { \bigl(2d_{s}^{+}-d_{1}^{+}+A_{1} \bigr)^{2}+8A_{1}\sum_{t=1}^{s-1} \bigl(d_{t}^{+}-d_{s}^{+} \bigr)} \Biggr\} $$
(4.3)

for\(2\leq s\leq n\). Then

$$ \rho \bigl(\mathbb{Q}(\overrightarrow{\mathcal{H}}) \bigr)\leq \min_{1\leq s\leq n}\psi_{s}. $$
(4.4)

Moreover, if\(\overrightarrow{\mathcal{H}}\)is a strongly connectedk-uniform directed hypergraph, then the equality in (4.4) holds if and only if\(d_{1}^{+}=d_{2}^{+}=\cdots=d_{n}^{+}\).

Proof

Let \(\mathbb{A}=\mathbb{Q}(\overrightarrow{\mathcal{H}})\). Applying Theorem 2.1 to \(\mathbb{Q}(\overrightarrow{\mathcal{H}})\), we have \(M=d_{1}^{+}\), \(N=\frac{1}{(k-1)!}\), \(r_{i}=2d_{i}^{+}\) for \(1\leq i\leq n\), and \(A_{1}=N_{1}\), so (4.3) follows from (2.1). Thus (4.4) holds by Theorem 2.1, and by Theorems 2.1 and 4.3 the equality in (4.4) holds if and only if \(d_{1}^{+}=d_{2}^{+}=\cdots=d_{n}^{+}\). □
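Analogously (again as an illustration, not from the paper), the bound (4.4) differs from (4.2) only in the formula for \(\psi_{s}\); the degree sequence below is hypothetical.

```python
import math

def signless_laplacian_bound(out_degrees, k):
    """min_{1 <= s <= n} psi_s from (4.3)-(4.4), with A_1 = C(n-2, k-2) / (k-1)."""
    d = sorted(out_degrees, reverse=True)   # d_1^+ >= ... >= d_n^+
    n = len(d)
    A1 = math.comb(n - 2, k - 2) / (k - 1)
    psis = [2.0 * d[0]]                     # psi_1 = 2 d_1^+
    for s in range(2, n + 1):
        ds = d[s - 1]
        gap = sum(d[t] - ds for t in range(s - 1))
        psis.append(0.5 * (2 * ds + d[0] - A1
                           + math.sqrt((2 * ds - d[0] + A1) ** 2 + 8 * A1 * gap)))
    return min(psis)

# d-out-regular case: psi_s = 2d for every s, matching the equality case of Theorem 4.5.
regular = signless_laplacian_bound([2, 2, 2, 2], 3)  # returns 4.0
```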

Abbreviations

P.R. China:

People’s Republic of China

MOE-LSC:

Key Laboratory of Scientific and Engineering Computing (Ministry of Education)

SHL-MAC:

Shanghai municipal education commission key laboratory of multi-physics modeling analysis and computation

Grant Nos:

Grant Numbers

Grant No:

Grant Number

i.e.:

id est

Commun. Math. Sci.:

Communications in Mathematical Sciences

Front. Math. China:

Frontiers of Mathematics in China

J. Ind. Manag. Optim.:

Journal of Industrial and Management Optimization

Linear Algebra Appl.:

Linear Algebra and Its Applications

Discrete Appl. Math.:

Discrete Applied Mathematics

Sci. China Math.:

Science China-Mathematics

Inform. Process. Lett.:

Information Processing Letters

Numer. Math.:

Numerische Mathematik

IEEE:

Institute of Electrical and Electronics Engineers

CAMSAP:

Computational Advances in Multi-Sensor Adaptive Processing

Appl. Math. Comput.:

Applied Mathematics and Computation

Graphs Combin.:

Graphs and Combinatorics

J. Symbolic Comput.:

Journal of Symbolic Computation

SIAM J. Matrix Anal. Appl.:

SIAM Journal on Matrix Analysis and Applications

References

  1. Berge, C.: Hypergraphs: Combinatorics of Finite Sets, 3rd edn. North-Holland, Amsterdam (1973)

  2. Brualdi, R.A.: Introductory Combinatorics, 3rd edn. China Machine Press, Beijing (2002)

  3. Chang, K.C., Pearson, K., Zhang, T.: Perron–Frobenius theorem for nonnegative tensors. Commun. Math. Sci. 6(2), 507–520 (2008)

  4. Chen, D.M., Chen, Z.B., Zhang, X.D.: Spectral radius of uniform hypergraphs and degree sequences. Front. Math. China 6, 1279–1288 (2017)

  5. Chen, Z.M., Qi, L.Q.: Circulant tensors with applications to spectral hypergraph theory and stochastic process. J. Ind. Manag. Optim. 12(4), 1227–1247 (2016)

  6. Cooper, J., Dutle, A.: Spectra of uniform hypergraphs. Linear Algebra Appl. 436, 3268–3292 (2012)

  7. Duan, X., Zhou, B.: Sharp bounds on the spectral radius of a nonnegative matrix. Linear Algebra Appl. 439, 2961–2970 (2013)

  8. Ducournau, A., Bretto, A.: Random walks in directed hypergraphs and application to semi-supervised image segmentation. Comput. Vis. Image Underst. 120, 91–102 (2014)

  9. Friedland, S., Gaubert, S., Han, L.: Perron–Frobenius theorems for nonnegative multilinear forms and extensions. Linear Algebra Appl. 438, 738–749 (2013)

  10. Gallo, G., Longo, G., Pallottino, S., Nguyen, S.: Directed hypergraphs and applications. Discrete Appl. Math. 42, 177–201 (1993)

  11. Hong, W.X., You, L.H.: Spectral radius and signless Laplacian spectral radius of strongly connected digraphs. Linear Algebra Appl. 457, 93–113 (2014)

  12. Hu, S.L., Huang, Z.H., Qi, L.Q.: Strictly nonnegative tensors and nonnegative tensor partition. Sci. China Math. 57(1), 181–195 (2014)

  13. Khan, M., Fan, Y.: On the spectral radius of a class of non-odd-bipartite even uniform hypergraphs. Linear Algebra Appl. 480, 93–106 (2015)

  14. Li, C.Q., Li, Y.T., Kong, X.: New eigenvalue inclusion sets for tensors. Numer. Linear Algebra Appl. 21(1), 39–50 (2014)

  15. Li, K., Wang, L.S.: A polynomial time approximation scheme for embedding a directed hypergraph on a ring. Inf. Process. Lett. 97, 203–207 (2006)

  16. Li, W., Ng, M.K.: Some bounds for the spectral radius of nonnegative tensors. Numer. Math. 130(2), 315–335 (2015)

  17. Lim, L.H.: Singular values and eigenvalues of tensors: a variational approach. In: Proceedings of the 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP ’05), pp. 129–132 (2005)

  18. Lim, L.H.: Foundations of numerical multilinear algebra: decomposition and approximation of tensors. Dissertation (2007)

  19. Lin, H.Y., Mo, B., Zhou, B., Weng, W.: Sharp bounds for ordinary and signless Laplacian spectral radii of uniform hypergraphs. Appl. Math. Comput. 285, 217–227 (2016)

  20. Pearson, K., Zhang, T.: On spectral hypergraph theory of the adjacency tensor. Graphs Comb. 30(5), 1233–1248 (2014)

  21. Qi, L.Q.: Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 40, 1302–1324 (2005)

  22. Shao, J.Y.: A general product of tensors with applications. Linear Algebra Appl. 439, 2350–2366 (2013)

  23. Shao, J.Y., Shan, H.Y., Zhang, L.: On some properties of the determinants of tensors. Linear Algebra Appl. 439, 3057–3069 (2013)

  24. Xie, J.S., Qi, L.Q.: Spectral directed hypergraph theory via tensors. Linear Multilinear Algebra 64(4), 780–794 (2016)

  25. Yang, Y.N., Yang, Q.Z.: Further results for Perron–Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl. 31(5), 2517–2530 (2010)

  26. Yang, Y.N., Yang, Q.Z.: On some properties of nonnegative weakly irreducible tensors (2011). arXiv:1111.0713v2

  27. You, L.H., Huang, X.H., Yuan, X.Y.: Sharp bounds for spectral radius of nonnegative weakly irreducible tensors. Front. Math. China 14(5), 989–1015 (2019)

  28. Yuan, X.Y., Zhang, M., Lu, M.: Some upper bounds on the eigenvalues of uniform hypergraphs. Linear Algebra Appl. 484, 540–549 (2015)


Acknowledgements

The authors would like to thank the referees for their valuable comments, corrections, and suggestions, which led to an improvement of the original manuscript.

Availability of data and materials

Not applicable in this work.

Funding

The research is supported by the National Natural Science Foundation of China (Grant Nos. 11971180, 11571123, 11531001), the Guangdong Provincial Natural Science Foundation (Grant No. 2019A1515012052), the Montenegrin-Chinese Science and Technology Cooperation Project (No. 3-12).

Author information

All authors contributed equally to this work. All authors read and approved the final manuscript.

Correspondence to Lihua You.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Lv, C., You, L. & Zhang, X. A sharp upper bound on the spectral radius of a nonnegative k-uniform tensor and its applications to (directed) hypergraphs. J Inequal Appl 2020, 32 (2020). https://doi.org/10.1186/s13660-020-2305-2


MSC

  • 05C65
  • 15A69

Keywords

  • Uniform tensors
  • Uniform (directed) hypergraphs
  • Spectral radius
  • Adjacency
  • Signless Laplacian